
Microsoft Official Academic Course

Microsoft SQL Server Database Design and Optimization, Exams 70-443 and 70-450

J. Steven Jones
David W. Tschanz
Dave Owen
Wayne R. Boyer

Credits
EXECUTIVE EDITOR: John Kane
DIRECTOR OF SALES: Mitchell Beaton
EXECUTIVE MARKETING MANAGER: Chris Ruel
MICROSOFT SENIOR PRODUCT MANAGER: Merrick Van Dongen of Microsoft Learning
EDITORIAL PROGRAM ASSISTANT: Jennifer Lartz
PRODUCTION MANAGER: Micheline Frederick
PRODUCTION EDITOR: Kerry Weinstein
CREATIVE DIRECTOR: Harry Nolan
COVER DESIGNER: Jim O'Shea
TECHNOLOGY AND MEDIA: Tom Kulesa/Wendy Ashenberg

This book was set in Garamond by Aptara, Inc. and printed and bound by Bind Rite Graphics.
The cover was printed by Phoenix Color.

Copyright © 2010 by John Wiley & Sons, Inc. All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying,
recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States
Copyright Act, without either the prior written permission of the Publisher, or authorization through payment
of the appropriate per-copy fee to the Copyright Clearance Center, Inc. 222 Rosewood Drive, Danvers, MA
01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the
Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, (201) 748-6011,
fax (201) 748-6008. To order books or for customer service, please call 1-800-CALL WILEY (225-5945).
Microsoft, ActiveX, Excel, InfoPath, Microsoft Press, MSDN, OneNote, Outlook, PivotChart, PivotTable,
PowerPoint, SharePoint, SQL Server, Visio, Windows, Windows Mobile, Windows Server, and Windows Vista are
either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.
Other product and company names mentioned herein may be the trademarks of their respective owners.
The example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events
depicted herein are fictitious. No association with any real company, organization, product, domain name, e-mail
address, logo, person, place, or event is intended or should be inferred.
The book expresses the authors' views and opinions. The information contained in this book is provided without
any express, statutory, or implied warranties. Neither the authors, John Wiley & Sons, Inc., Microsoft Corporation,
nor their resellers or distributors will be held liable for any damages caused or alleged to be caused either directly or
indirectly by this book.
Evaluation copies are provided to qualified academics and professionals for review purposes only, for use in their
courses during the next academic year. These copies are licensed and may not be sold or transferred to a third party.
Upon completion of the review period, please return the evaluation copy to Wiley. Return instructions and a free
of charge return shipping label are available at www.wiley.com/go/returnlabel. Outside of the United States, please
contact your local representative.
ISBN 978-0-470-18365-6
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1


Foreword from the Publisher


Wiley's publishing vision for the Microsoft Official Academic Course series is to provide
students and instructors with the skills and knowledge they need to use Microsoft technology
effectively in all aspects of their personal and professional lives. Quality instruction is required
to help both educators and students get the most from Microsoft's software tools and to become
more productive. Thus our mission is to make our instructional programs trusted educational
companions for life.
To accomplish this mission, Wiley and Microsoft have partnered to develop the highest
quality educational programs for Information Workers, IT Professionals, and Developers.
Materials created by this partnership carry the brand name Microsoft Official Academic
Course, assuring instructors and students alike that the content of these textbooks is fully
endorsed by Microsoft, and that they provide the highest quality information and instruction
on Microsoft products. The Microsoft Official Academic Course textbooks are "Official" in
still one more way: they are the officially sanctioned courseware for Microsoft IT Academy
members.
The Microsoft Official Academic Course series focuses on workforce development. These
programs are aimed at those students seeking to enter the workforce, change jobs, or embark
on new careers as information workers, IT professionals, and developers. Microsoft Official
Academic Course programs address their needs by emphasizing authentic workplace scenarios
with an abundance of projects, exercises, cases, and assessments.
The Microsoft Official Academic Courses are mapped to Microsoft's extensive research
and job-task analysis, the same research and analysis used to create the Microsoft Certified
Information Technology Professional (MCITP) exam. The textbooks focus on real skills for
real jobs. As students work through the projects and exercises in the textbooks, they enhance
their level of knowledge and their ability to apply the latest Microsoft technology to everyday
tasks. These students also gain resume-building credentials that can assist them in finding a
job, keeping their current job, or furthering their education.
Lifelong learning is now an absolute necessity. Job roles, and even whole job
categories, are changing so quickly that none of us can stay competitive and productive without
continuously updating our skills and capabilities. The Microsoft Official Academic Course
offerings, and their focus on Microsoft certification exam preparation, provide a means for
people to acquire and effectively update their skills and knowledge. Wiley supports students
in this endeavor through the development and distribution of these courses as Microsoft's
official academic publisher.
Today, educational publishing requires attention to providing quality print and robust
electronic content. By integrating Microsoft Official Academic Course products, WileyPLUS,
and Microsoft certifications, we are better able to deliver efficient learning solutions for
students and teachers alike.
Bonnie Lieberman
General Manager and Senior Vice President


Preface
Welcome to the Microsoft Official Academic Course (MOAC) program for Microsoft SQL
Server Database Design and Optimization. MOAC represents the collaboration between
Microsoft Learning and the publishing company John Wiley & Sons, Inc. Microsoft and Wiley
teamed up to produce a series of textbooks that deliver compelling and innovative teaching
solutions to instructors and superior learning experiences for students. Infused and informed
by in-depth knowledge from the creators of SQL Server, and crafted by a publisher known
worldwide for the pedagogical quality of its products, these textbooks maximize skills transfer
in minimum time. Students are challenged to reach their potential by using their new technical skills as highly productive members of the workforce.
Because this knowledge base comes directly from Microsoft, architect of SQL Server
and creator of the Microsoft Certified IT Professional exams (www.microsoft.
com/learning/mcp/mcitp), you are sure to receive the topical coverage that is most relevant to
students' personal and professional success. Microsoft's direct participation not only assures
you that MOAC textbook content is accurate and current; it also means that students will
receive the best instruction possible to enable their success on certification exams and in the
workplace.

The Microsoft Official Academic Course Program

The Microsoft Official Academic Course series is a complete program for instructors and institutions to prepare and deliver great courses on Microsoft software technologies. With MOAC,
we recognize that, because of the rapid pace of change in the technology and curriculum developed
by Microsoft, instructors have an ongoing set of needs beyond classroom instruction tools in order
to be ready to teach the course. The MOAC program endeavors to provide solutions
for all these needs in a systematic manner in order to ensure a successful and rewarding course
experience for both instructor and student: technical and curriculum training for instructor
readiness with new software releases; the software itself for student use at home for building
hands-on skills, assessment, and validation of skill development; and a great set of tools for
delivering instruction in the classroom and lab. All are important to the smooth delivery of an
interesting course on Microsoft software, and all are provided with the MOAC program. We
think about the model below as a gauge for ensuring that we completely support you in your
goal of teaching a great course. As you evaluate your instructional materials options, you may
wish to use the model for comparison purposes with available products.


Illustrated Book Tour

Pedagogical Features

The MOAC textbook for SQL Server Database Design and Optimization is designed to cover
all the learning objectives for the MCITP exam; this set of objectives is referred to as the exam's objective domain.
The Microsoft Certified Information Technology Professional (MCITP) exam objectives are
highlighted throughout the textbook. Many pedagogical features have been developed specifically for Microsoft Official Academic Course programs.
Presenting the extensive procedural information and technical concepts woven throughout
the textbook raises challenges for the student and instructor alike. The Illustrated Book Tour
that follows provides a guide to the rich features contributing to the Microsoft Official Academic
Course program's pedagogical plan. Following is a list of key features in each lesson designed
to prepare students for success on the certification exams and in the workplace:
Each lesson begins with a Lesson Skill Matrix. More than a standard list of learning
objectives, the Domain Matrix correlates each software skill covered in the
lesson to the specific exam objective domain.
A Lab Manual accompanies this textbook package. The Lab Manual contains hands-on
lab work corresponding to each of the lessons within the textbook. Numbered steps
give detailed, step-by-step instructions to help students learn workplace skills associated with database design and optimization. The labs are constructed using
real-world scenarios to mimic the tasks students will see in the workplace.
Illustrations: Screen images provide visual feedback as students work through the
exercises. The images reinforce key concepts, provide visual clues about the steps, and
allow students to check their progress.
Key Terms: Important technical vocabulary is listed at the beginning of the lesson.
When these terms are used later in the lesson, they appear in bold italic type and are
defined. The Glossary contains all of the key terms and their definitions.
Engaging point-of-use Reader aids, located throughout the lessons, tell students why
this topic is relevant (The Bottom Line), provide students with helpful hints (Take Note),
or show alternate ways to accomplish tasks (Another Way). Reader aids also provide
additional relevant or background information that adds value to the lesson.
Certification Ready? features throughout the text signal students where a specific
certification objective is covered. They provide students with a chance to check their
understanding of that particular exam objective and, if necessary, review the section
of the lesson where it is covered.
Knowledge Assessments provide lesson-ending activities.


Lesson Features

The Illustrated Book Tour reproduces reduced sample pages from the lessons to show these features in context. The annotated callouts include the Lesson Skill Matrix, Key Terms, Certification Ready? alerts, Warning reader aids, Take Note reader aids, Another Way reader aids, X Ref reader aids, The Bottom Line reader aids, Lab Exercise callouts, SQL Server 2008 information boxes, informative diagrams, easy-to-read tables, screen images, in-lesson case studies, end-of-lesson case studies, Knowledge Assessments, and the Summary Skill Matrix.

Conventions and Features Used in This Book
This book uses particular fonts, symbols, and heading conventions to highlight important
information or to call your attention to special steps. For more information about the features
in each lesson, refer to the Illustrated Book Tour section.

CONVENTION / MEANING

THE BOTTOM LINE
This feature provides a brief summary of the material to be covered in the section that follows.

CERTIFICATION READY?
This feature signals the point in the text where a specific certification objective is covered. It provides you with a chance to check your understanding of that particular MCITP objective and, if necessary, review the section of the lesson where it is covered.

TAKE NOTE, ANOTHER WAY, and REF
Reader aids appear in shaded boxes found in your text. Take Note provides helpful hints related to particular tasks or topics. Another Way provides an alternative procedure for accomplishing a particular task. Ref notes provide pointers to information discussed elsewhere in the textbook or describe interesting features of SQL Server that are not directly addressed in the current topic or exercise.

A shared printer can be used by many individuals on a network.
Key terms appear in bold italic.

Key My Name is.
Any text you are asked to key appears in color.

Click OK.
Any button on the screen you are supposed to click on or select will also appear in color.

SQL Server 2008
Information that is particular to the 2008 version of SQL Server is shown in color.

LAB EXERCISE
The Lab Exercise feature shows where a corresponding hands-on exercise is available in the companion Lab Manual.


Instructor Support Program

The Microsoft Official Academic Course programs are accompanied by a rich array of resources
that incorporate the extensive textbook visuals to form a pedagogically cohesive package.
These resources provide all the materials instructors need to deploy and deliver their courses.
Resources available online for download include:
The MSDN Academic Alliance is designed to provide faculty and students with the easiest and
most inexpensive access to Microsoft developer tools, products, and technologies for use in labs,
classrooms, and on student PCs. A free 3-year membership is available to qualified
MOAC adopters.
Note: Microsoft SQL Server can be downloaded from MSDN AA for use by students in
this course.
The Instructor's Guide contains solutions to all the textbook exercises as well as chapter
summaries and lecture notes. The Instructor's Guide and Syllabi for various term lengths
are available from the Book Companion site (www.wiley.com/college/microsoft).
The Test Bank contains hundreds of questions in multiple-choice, true-false, short answer,
and essay formats and is available to download from the Instructor's Book Companion site
(www.wiley.com/college/microsoft). A complete answer key is provided.
PowerPoint Presentations and Images. A complete set of PowerPoint presentations is
available on the Instructor's Book Companion site (www.wiley.com/college/microsoft) to
enhance classroom presentations. Tailored to the text's topical coverage and Skills Matrix,
these presentations are designed to convey key Microsoft SQL Server concepts addressed
in the text.
All figures from the text are on the Instructor's Book Companion site (www.wiley.com/
college/microsoft). You can incorporate them into your PowerPoint presentations, or create
your own overhead transparencies and handouts.
By using these visuals in class discussions, you can help focus students' attention on key
elements of Microsoft SQL Server and help them understand how to use it effectively in the
workplace.
When it comes to improving the classroom experience, there is no better source of ideas
and inspiration than your fellow colleagues. The Wiley Faculty Network connects teachers
with technology, facilitates the exchange of best practices, and helps to enhance instructional
efficiency and effectiveness. Faculty Network activities include technology training and
tutorials, virtual seminars, peer-to-peer exchanges of experiences and ideas, personal
consulting, and sharing of resources. For details visit www.WhereFacultyConnect.com.
Microsoft SQL Server Books Online. This set of online documentation helps you understand SQL Server and how to implement data management and business intelligence projects.
SQL Server Books Online is referred to throughout this text as a valuable supplement to your
work with SQL Server. You can find SQL Server Books Online at
http://msdn.microsoft.com/en-us/library/ms130214(SQL.90).aspx and
http://msdn.microsoft.com/en-us/library/ms130214.aspx.


MSDN ACADEMIC ALLIANCE: FREE 3-YEAR MEMBERSHIP
AVAILABLE TO QUALIFIED ADOPTERS!
The Microsoft Developer Network Academic Alliance (MSDN AA) is designed to provide
the easiest and most inexpensive way for universities to make the latest Microsoft developer
tools, products, and technologies available in labs, classrooms, and on student PCs. MSDN
AA is an annual membership program for departments teaching Science, Technology,
Engineering, and Mathematics (STEM) courses. The membership provides a complete
solution to keep academic labs, faculty, and students on the leading edge of technology.
Software available in the MSDN AA program is provided at no charge to adopting
departments through the Wiley and Microsoft publishing partnership.
As a bonus to this free offer, faculty will be introduced to Microsoft's Faculty
Connection and Academic Resource Center. It takes time and preparation to keep
students engaged while giving them a fundamental understanding of theory, and the
Microsoft Faculty Connection is designed to help STEM professors with this preparation
by providing articles, curriculum, and tools that professors can use to engage and
inspire today's technology students.
Contact your Wiley rep for details.
For more information about the MSDN Academic Alliance program, go to:
msdn.microsoft.com/academic/
Note: Microsoft SQL Server can be downloaded from MSDN AA for use by students in
this course.

Important Web Addresses and Phone Numbers


To locate the Wiley Higher Education Rep in your area, go to the following Web address
and click on the "Who's My Rep?" link at the top of the page.
www.wiley.com/college
Or Call the MOAC Toll Free Number: 1 + (888) 764-7001 (U.S. & Canada only).
To learn more about becoming a Microsoft Certified Professional and exam availability, visit
www.microsoft.com/learning/mcp.


Student Support Program

Book Companion Web Site (www.wiley.com/college/microsoft)


The students' book companion site for the MOAC series includes any resources, exercise files,
and Web links that will be used in conjunction with this course.

Wiley Desktop Editions


Wiley MOAC Desktop Editions are innovative, electronic versions of printed textbooks.
Students buy the desktop version for 50% off the U.S. price of the printed text, and get the
added value of permanence and portability. Wiley Desktop Editions provide students with
numerous additional benefits that are not available with other e-text solutions.
Wiley Desktop Editions are NOT subscriptions; students download the Wiley Desktop Edition
to their computer desktops. Students own the content they buy to keep for as long as they want.
Once a Wiley Desktop Edition is downloaded to the computer desktop, students have instant
access to all of the content without being online. Students can also print out the sections they
prefer to read in hard copy. Students also have access to fully integrated resources within their
Wiley Desktop Edition. From highlighting their e-text to taking and sharing notes, students can
easily personalize their Wiley Desktop Edition as they are reading or following along in class.

Microsoft SQL Server Software


As an adopter of a MOAC textbook, your school's department is eligible for a free three-year
membership to the MSDN Academic Alliance (MSDN AA). Through MSDN AA, full versions
of Microsoft SQL Server are available for your use with this course. See your Wiley rep for details.

Preparing to Take the Microsoft Certified Information Technology Professional (MCITP) Exam
The Microsoft Certified Information Technology Professional (MCITP) certifications enable
professionals to target specific technologies and to distinguish themselves by demonstrating
in-depth knowledge and expertise in their specialized technologies. Microsoft Certified
Information Technology Professionals are consistently capable of implementing, building,
troubleshooting, and debugging a particular Microsoft Technology.
For organizations, the new generation of Microsoft certifications provides better skills-verification tools that help with assessing not only in-demand skills on SQL Server, but also
the ability to quickly complete on-the-job tasks. Individuals will find it easier to identify and
work toward the certification credential that meets their personal and professional goals.
To learn more about becoming a Microsoft Certified Professional and exam availability, visit
www.microsoft.com/learning/mcp.


Microsoft Certifications for IT Professionals


The new Microsoft Certified Technology Specialist (MCTS) and Microsoft Certified IT
Professional (MCITP) credentials provide IT professionals with a simpler and more targeted
framework to showcase their technical skills in addition to the skills that are required for specific developer job roles.
The Microsoft Certified Professional (MCP), Microsoft Certified System Administrator
(MCSA), and Microsoft Certified Systems Engineer (MCSE) credentials continue to provide
IT professionals who use Microsoft SQL Server, Windows XP, and Windows Server 2003 with
industry recognition and validation of their IT skills and experience.

Microsoft Certified Technology Specialist


The new Microsoft Certified Technology Specialist (MCTS) credential highlights your skills
using a specific Microsoft technology. You can demonstrate your abilities as an IT professional
or developer with in-depth knowledge of the Microsoft technology that you use today or are
planning to deploy.
The MCTS certifications enable professionals to target specific technologies and to distinguish
themselves by demonstrating in-depth knowledge and expertise in their specialized technologies. Microsoft Certified Technology Specialists are consistently capable of implementing,
building, troubleshooting, and debugging a particular Microsoft technology.
You can learn more about the MCTS program at www.microsoft.com/learning/mcp/mcts.

Microsoft Certified IT Professional


The new Microsoft Certified IT Professional (MCITP) credential lets you highlight your
specific area of expertise. Now, you can easily distinguish yourself as an expert in engineering,
designing, and deploying database solutions with Microsoft SQL Server.
By becoming certified, you demonstrate to employers that you have achieved a predictable
level of skill in the use of Microsoft technologies. Employers often require certification either
as a condition of employment or as a condition of advancement within the company or other
organization.
You can learn more about the MCITP program at www.microsoft.com/learning/mcp/mcitp.
The certification examinations are sponsored by Microsoft and administered through Microsoft's exam delivery partner, Prometric.

Preparing to Take an Exam


Unless you are a very experienced user, you will need a test-preparation course to prepare you
to complete the test correctly and within the time allowed. The Microsoft Official
Academic Course series is designed to prepare you with a strong knowledge of all exam topics,
and with some additional review and practice on your own, you should feel confident in your
ability to pass the appropriate exam.
After you decide which exam to take, review the list of objectives for the exam. You can easily
identify tasks that are included in the objective list by locating the Lesson Skill Matrix at the
start of each lesson and the Certification Ready sidebars in the margin of the lessons in this
book.
To take the MCITP test, visit www.microsoft.com/learning/mcp to locate your nearest testing
center. Then call the testing center directly to schedule your test. The amount of advance notice
you should provide will vary for different testing centers, and it typically depends on the number


of computers available at the testing center, the number of other testers who have already been
scheduled for the day on which you want to take the test, and the number of times per week
that the testing center offers MCITP testing. In general, you should call to schedule your test at
least two weeks prior to the date on which you want to take the test.
When you arrive at the testing center, you might be asked for proof of identity. A driver's
license or passport is an acceptable form of identification. If you do not have either of these
items of documentation, call your testing center and ask what alternative forms of identification will be accepted. If you are retaking a test, bring your MCITP identification number,
which will have been given to you when you previously took the test. If you have not prepaid
or if your organization has not already arranged to make payment for you, you will need to
pay the test-taking fee when you arrive.

Student CD
The CD-ROM included with this book contains practice exams that will help you hone
your knowledge before you take the MCITP Microsoft SQL Server Database Administrator
70-443/70-450 certification examination. The exams are meant to provide practice for your
certification exam and are also good reinforcement of the material covered in the course.
The enclosed Student CD will run automatically. Upon accepting the license agreement, you
will proceed directly to the exams. The exams also can be accessed through the Assets folder
located within the CD files.

Microsoft SQL Server Books Online


This set of online documentation helps you understand SQL Server and how to implement
data management and business intelligence projects. SQL Server Books Online is referred to
throughout this text as a valuable supplement to your work with SQL Server. You can find
SQL Server Books Online at http://msdn.microsoft.com/en-us/library/ms130214(SQL.90).aspx and at http://msdn.microsoft.com/en-us/library/ms130214.aspx.


About the Authors

Dave Owen graduated from California State Polytechnic College as an Electronic Engineer
with an emphasis on communications theory. He did, however, have to take a programming
course: Fortran 4 with data entry on a punch card machine and submitted as a batch file for
overnight processing.
He was the seventh employee hired in 1971 at the Naval Civil Engineering Laboratory for
the then brand-new initiative to bring naval facilities in line with environmental compliance
regulations. He ended up as the data management guy. He was a programmer (it was Rocky
Mountain BASIC then); he was the network guy; he was the database guy (you have probably never heard of Speed for the Wang Computer System); he was the enterprise planner (he
led an effort to update the Navy's data tracking system, which was approved and budgeted
at $33 million); he ran the help desk (his team was the only Naval Facilities Engineering
Command group to receive an "Outstanding" rating from the Inspector General); he was having
a good time.
After retirement, he visited the County of Ventura Workforce Development Division, which
offered him training to become certified in Microsoft and Novell technologies. He was too
long entrenched. He needed to become current. He became a CNA, MCSE, and MCDBA.
In the fall of 1998 he started teaching at Moorpark College. He taught every certificated
computer and networking topic desired by the Department Chair and earned his MCT
two years later. In addition he started teaching at Microsoft Certified Partners for Learning
Solutions such as New Horizons. A lot of other certifications followed.
He likes teaching. He learns more from students than from self-study or trying to fix problems.
Students approach situations in ways he can never imagine. Understanding their perspective
provides him with infinitely more insight than he could ever glean alone. Now he's preparing
college textbooks and other publications.
Wayne Boyer is a consultant, systems analyst, programmer, network engineer, and information systems manager who started working with relational database systems just a short
while ago in 1978. Wayne has extensive application systems experience with manufacturing
and financial systems, commonly referred to as MRP and ERP systems. Most of
Wayne's experience in years past was with HP-3000 minicomputer systems running a relational database system known as Image. This experience with database systems led to Wayne's
current expertise and experience with modern relational database systems such as SQL Server
and Oracle. With over 30 years of Information Technology experience, Wayne brings a depth
of real-world experience to current technology topics. Currently Wayne is teaching Microsoft
curriculum topics at Moorpark College while also consulting and providing support for a
wide variety of clients on a range of IT-related subjects. He has also acquired a number of
industry certifications: MCSE, MCDBA, MCITP for SQL Server 2005, and MCITP for
Enterprise Support. Currently Wayne is working toward Cisco networking certification as
well as an upgrade to MCITP for SQL Server 2008.


Steve Jones has been working with SQL Server for more than a decade, starting with v4.2 on
OS/2 and enjoying the new features and capabilities of every version since. After working as a
DBA and developer for a variety of companies, Steve founded SQLServerCentral.com along
with Brian Knight and Andy Warren in 2001. SQLServerCentral.com has grown into a wonderful SQL Server community that provides daily articles and questions on all aspects of SQL
Server to over 300,000 members. Starting in 2004, Steve became the full-time editor of the
community and ensures it continues to evolve into the best resource possible for SQL Server
professionals. Over the last decade, Steve has written more than 200 articles about SQL
Server for SQLServerCentral.com, the SQL Server Standard magazine, SQL Server Magazine,
and Database Journal. Steve has spoken at the PASS Summits, where SQLServerCentral.com
sponsors an opening reception every year, and he wrote an earlier book for Sybex on SQL
Server 2000.
David W. Tschanz is the coauthor of the recent Sybex book Mastering SQL Server 2005. He
has been working with and managing large datasets for four decades. His work has included
analysis of population dynamics, voting behavior, and epidemiological data. He has been
writing on computer topics for the past several years, including four books and about 100
articles in the area. He is also a regular contributor to Redmond magazine. Dave currently lives
outside the United States, where his eclectic nature allows him to pursue projects involving
databases, IT infrastructure, web development, archaeology, the ancient Nabataean capital
of Petra, medical history, military science, and demography. He can be reached by e-mail at
desertwriter1121@yahoo.com, or look for him in Connecticut, Saudi Arabia, or Tasmania, his
three favorite haunts.


Acknowledgments

MOAC Instructor Advisory Board


We thank our Instructor Advisory Board, an elite group of educators who have assisted us every
step of the way in building these products. Advisory Board members have acted as our sounding
board on key pedagogical and design decisions leading to the development of these compelling
and innovative textbooks for future Information Workers. Their dedication to technology
education is truly appreciated.

Charles DeSassure, Tarrant County College


Charles DeSassure is Department Chair and Instructor of Computer Science & Information
Technology at Tarrant County College Southeast Campus, Arlington, Texas. He has had
experience as an MIS Manager, system analyst, field technology analyst, LAN Administrator,
microcomputer specialist, and public school teacher in South Carolina. DeSassure has worked
in higher education for more than ten years and received the Excellence Award in Teaching
from the National Institute for Staff and Organizational Development (NISOD). He currently
serves on the Educational Testing Service (ETS) iSkills National Advisory Committee and
chaired the Tarrant County College District Student Assessment Committee. He writes
proposals and makes presentations at major educational conferences nationwide. DeSassure has
served as a textbook reviewer for John Wiley & Sons and Prentice Hall. He teaches courses in
information security, networking, distance learning, and computer literacy. DeSassure holds a
master's degree in Computer Resources & Information Management from Webster University.

Kim Ehlert, Waukesha County Technical College


Kim Ehlert is the Microsoft Program Coordinator and a Network Specialist instructor at
Waukesha County Technical College, teaching the full range of MCSE and networking courses
for the past nine years. Prior to joining WCTC, Kim was a professor at the Milwaukee School of
Engineering for five years where she oversaw the Novell Academic Education and the Microsoft
IT Academy programs. She has a wide variety of industry experience including network design
and management for Johnson Controls, local city fire departments, police departments, large
church congregations, health departments, and accounting firms. Kim holds many industry certifications, including MCDST, MCSE, Security+, Network+, Server+, MCT, and CNE.
Kim has a bachelor's degree in Information Systems and a master's degree in Business
Administration from the University of Wisconsin–Milwaukee. When she is not busy teaching, she enjoys spending time with her husband Gregg and their two children, Alex, 14, and
Courtney, 17.

Penny Gudgeon, Corinthian Colleges, Inc.


Penny Gudgeon is the Program Manager for IT curriculum at Corinthian Colleges, Inc.
Previously, she was responsible for computer programming and web curriculum for twenty-seven campuses in Corinthian's Canadian division, CDI College of Business, Technology and
Health Care. Penny joined CDI College in 1997 as a computer programming instructor at
one of the campuses outside of Toronto. Prior to joining CDI College, Penny taught productivity software at another Canadian college, the Academy of Learning, for four years. Penny
has experience in helping students achieve their goals through various learning models from
instructor-led to self-directed to online.

Before embarking on a career in education, Penny worked in the fields of advertising, marketing/sales, mechanical and electronic engineering technology, and computer programming. When
not working from her home office or indulging her passion for lifelong learning, Penny likes to
read mysteries, garden, and relax at home in Hamilton, Ontario, with her Shih-Tzu, Gracie.

Margaret Leary, Northern Virginia Community College


Margaret Leary is Professor of IST at Northern Virginia Community College, teaching
Networking and Network Security Courses for the past ten years. She is the co-Principal
Investigator on the CyberWATCH initiative, an NSF-funded regional consortium of higher
education institutions and businesses working together to increase the number of network
security personnel in the workforce. She also serves as a Senior Security Policy Manager and
Research Analyst at Nortel Government Solutions and holds a CISSP certification.
Margaret holds a B.S.B.A. and MBA/Technology Management from the University
of Phoenix, and is pursuing her Ph.D. in Organization and Management with an IT
Specialization at Capella University. Her dissertation is titled "Quantifying the Discoverability
of Identity Attributes in Internet-Based Public Records: Impact on Identity Theft and
Knowledge-based Authentication." She has several other published articles in various government and industry magazines, notably on identity management and network security.

Wen Liu, ITT Educational Services, Inc.


Wen Liu is Director of Corporate Curriculum Development at ITT Educational Services,
Inc. He joined the ITT corporate headquarters in 1998 as a Senior Network Analyst to
plan and deploy the corporate WAN infrastructure. A year later he assumed the position
of Corporate Curriculum Manager supervising the curriculum development of all IT programs. After he was promoted to the current position three years ago, he continued to manage the curriculum research and development for all the programs offered in the School of
Information Technology in addition to supervising the curriculum development in other areas
(such as Schools of Drafting and Design and Schools of Electronics Technology). Prior to his
employment with ITT Educational Services, Liu was a Telecommunications Analyst at the
state government of Indiana working on the state backbone project that provided Internet
and telecommunications services to public users such as K-12 and higher education
institutions, government agencies, libraries, and healthcare facilities.
Wen Liu has an M.A. in Student Personnel Administration in Higher Education and an
M.S. in Information and Communications Sciences from Ball State University, Indiana.
He formerly served as the director of special projects on the board of directors of the Indiana
Telecommunications User Association and on Course Technology's IT Advisory
Board. He is currently a member of the IEEE and its Computer Society.

Jared Spencer, Westwood College Online


Jared Spencer has been the Lead Faculty for Networking at Westwood College Online since
2006. He began teaching in 2001 and has taught both on-ground and online for a variety of
institutions, including Robert Morris University and Point Park University. In addition to his
academic background, he has more than fifteen years of industry experience working for companies including the Thomson Corporation and IBM.
Jared has a masters degree in Internet Information Systems and is currently ABD and
pursuing his doctorate in Information Systems at Nova Southeastern University. He has
authored several papers that have been presented at conferences and appeared in publications such as the Journal of Internet Commerce and the Journal of Information Privacy
and Security (JIPC). He holds a number of industry certifications, including AIX (UNIX),
A+, Network+, Security+, MCSA on Windows 2000, and MCSA on Windows 2003
Server.


We thank Steve Strom from Butler Community College for his diligent review, providing
invaluable feedback in the service of quality instructional materials.

Focus Group and Survey Participants


Finally, we thank the hundreds of instructors who participated in our focus groups and surveys
to ensure that the Microsoft Official Academic Courses best met the needs of our customers.

Jean Aguilar, Mt. Hood Community College
Konrad Akens, Zane State College
Michael Albers, University of
Memphis
Diana Anderson, Big Sandy
Community & Technical College
Phyllis Anderson, Delaware County
Community College
Judith Andrews, Feather River College
Damon Antos, American River
College
Bridget Archer, Oakton Community
College
Linda Arnold, Harrisburg Area Community College–Lebanon Campus
Neha Arya, Fullerton College
Mohammad Bajwa, Katharine Gibbs School–New York
Virginia Baker, University of Alaska
Fairbanks
Carla Bannick, Pima Community
College
Rita Barkley, Northeast Alabama
Community College
Elsa Barr, Central Community College–Hastings
Ronald W. Barry, Ventura County
Community College District
Elizabeth Bastedo, Central Carolina
Technical College
Karen Baston, Waubonsee Community
College
Karen Bean, Blinn College
Scott Beckstrand, Community College
of Southern Nevada
Paulette Bell, Santa Rosa Junior College
Liz Bennett, Southeast Technical
Institute
Nancy Bermea, Olympic College
Lucy Betz, Milwaukee Area Technical
College
Meral Binbasioglu, Hofstra University

Catherine Binder, Strayer University & Katharine Gibbs School–Philadelphia
Terrel Blair, El Centro College
Ruth Blalock, Alamance Community
College
Beverly Bohner, Reading Area
Community College
Henry Bojack, Farmingdale State
University
Matthew Bowie, Luna Community
College
Julie Boyles, Portland Community
College
Karen Brandt, College of the Albemarle
Stephen Brown, College of San Mateo
Jared Bruckner, Southern Adventist
University
Pam Brune, Chattanooga State
Technical Community College
Sue Buchholz, Georgia Perimeter College
Roberta Buczyna, Edison College
Angela Butler, Mississippi Gulf Coast
Community College
Rebecca Byrd, Augusta Technical College
Kristen Callahan, Mercer County
Community College
Judy Cameron, Spokane Community
College
Dianne Campbell, Athens Technical
College
Gena Casas, Florida Community
College at Jacksonville
Jesus Castrejon, Latin Technologies
Gail Chambers, Southwest Tennessee
Community College
Jacques Chansavang, Indiana University–Purdue University Fort Wayne
Nancy Chapko, Milwaukee Area
Technical College
Rebecca Chavez, Yavapai College
Sanjiv Chopra, Thomas Nelson
Community College

Greg Clements, Midland Lutheran College
Dayna Coker, Southwestern Oklahoma State University–Sayre Campus
Tamra Collins, Otero Junior College
Janet Conrey, Gavilan Community
College
Carol Cornforth, West Virginia
Northern Community College
Gary Cotton, American River College
Edie Cox, Chattahoochee Technical
College
Rollie Cox, Madison Area Technical
College
David Crawford, Northwestern
Michigan College
J.K. Crowley, Victor Valley College
Rosalyn Culver, Washtenaw
Community College
Sharon Custer, Huntington University
Sandra Daniels, New River Community
College
Anila Das, Cedar Valley College
Brad Davis, Santa Rosa Junior College
Susan Davis, Green River Community
College
Mark Dawdy, Lincoln Land
Community College
Jennifer Day, Sinclair Community
College
Carol Deane, Eastern Idaho Technical
College
Julie DeBuhr, Lewis-Clark State College
Janis DeHaven, Central Community
College
Drew Dekreon, University of Alaska
Anchorage
Joy DePover, Central Lakes College
Salli DiBartolo, Brevard Community
College
Melissa Diegnau, Riverland
Community College
Al Dillard, Lansdale School of Business
Marjorie Duffy, Cosumnes River College


Sarah Dunn, Southwest Tennessee Community College
Shahla Durany, Tarrant County College–South Campus
Kay Durden, University of Tennessee at
Martin
Dineen Ebert, St. Louis Community College–Meramec
Donna Ehrhart, State University of New York–Brockport
Larry Elias, Montgomery County
Community College
Glenda Elser, New Mexico State
University at Alamogordo
Angela Evangelinos, Monroe County
Community College
Angie Evans, Ivy Tech Community
College of Indiana
Linda Farrington, Indian Hills
Community College
Dana Fladhammer, Phoenix College
Richard Flores, Citrus College
Connie Fox, Community and Technical College at Institute of Technology, West Virginia University
Wanda Freeman, Okefenokee
Technical College
Brenda Freeman, Augusta Technical
College
Susan Fry, Boise State University
Roger Fulk, Wright State University–Lake Campus
Sue Furnas, Collin County
Community College District
Sandy Gabel, Vernon College
Laura Galvan, Fayetteville Technical
Community College
Candace Garrod, Red Rocks
Community College
Sherrie Geitgey, Northwest State
Community College
Chris Gerig, Chattahoochee Technical
College
Barb Gillespie, Cuyamaca College
Jessica Gilmore, Highline Community
College
Pamela Gilmore, Reedley College
Debbie Glinert, Queensborough
Community College
Steven Goldman, Polk Community
College
Bettie Goodman, C.S. Mott
Community College

Mike Grabill, Katharine Gibbs School–Philadelphia
Francis Green, Penn State University
Walter Griffin, Blinn College
Fillmore Guinn, Odessa College
Helen Haasch, Milwaukee Area
Technical College
John Habal, Ventura College
Joy Haerens, Chaffey College
Norman Hahn, Thomas Nelson
Community College
Kathy Hall, Alamance Community
College
Teri Harbacheck, Boise State University
Linda Harper, Richland Community
College
Maureen Harper, Indian Hills
Community College
Steve Harris, Katharine Gibbs School–New York
Robyn Hart, Fresno City College
Darien Hartman, Boise State
University
Gina Hatcher, Tacoma Community
College
Winona T. Hatcher, Aiken Technical
College
BJ Hathaway, Northeast Wisconsin Tech
College
Cynthia Hauki, West Hills College–Coalinga
Mary L. Haynes, Wayne County
Community College
Marcie Hawkins, Zane State College
Steve Hebrock, Ohio State University Agricultural Technical Institute
Sue Heistand, Iowa Central Community
College
Heith Hennel, Valencia Community
College
Donna Hendricks, South Arkansas
Community College
Judy Hendrix, Dyersburg State
Community College
Gloria Hensel, Matanuska-Susitna College, University of Alaska Anchorage
Gwendolyn Hester, Richland College
Tammarra Holmes, Laramie County
Community College
Dee Hobson, Richland College
Keith Hoell, Katharine Gibbs School–New York

Pashia Hogan, Northeast State Technical Community College
Susan Hoggard, Tulsa Community
College
Kathleen Holliman, Wallace
Community College Selma
Chastity Honchul, Brown Mackie
College/Wright State University
Christie Hovey, Lincoln Land
Community College
Peggy Hughes, Allegany College of
Maryland
Sandra Hume, Chippewa Valley
Technical College
John Hutson, Aims Community
College
Celia Ing, Sacramento City College
Joan Ivey, Lanier Technical College
Barbara Jaffari, College of the
Redwoods
Penny Jakes, University of Montana
College of Technology
Eduardo Jaramillo, Peninsula College
Barbara Jauken, Southeast Community
College
Susan Jennings, Stephen F. Austin
State University
Leslie Jernberg, Eastern Idaho Technical
College
Linda Johns, Georgia Perimeter
College
Brent Johnson, Okefenokee Technical
College
Mary Johnson, Mt. San Antonio College
Shirley Johnson, Trinidad State Junior College–Valley Campus
Sandra M. Jolley, Tarrant County
College
Teresa Jolly, South Georgia Technical
College
Dr. Deborah Jones, South Georgia
Technical College
Margie Jones, Central Virginia
Community College
Randall Jones, Marshall Community
and Technical College
Diane Karlsbraaten, Lake Region State
College
Teresa Keller, Ivy Tech Community
College of Indiana
Charles Kemnitz, Pennsylvania College
of Technology
Sandra Kinghorn, Ventura College


Bill Klein, Katharine Gibbs School–Philadelphia
Bea Knaapen, Fresno City College
Kit Kofoed, Western Wyoming
Community College
Maria Kolatis, County College of Morris
Barry Kolb, Ocean County College
Karen Kuralt, University of Arkansas
at Little Rock
Belva-Carole Lamb, Rogue Community
College
Betty Lambert, Des Moines Area
Community College
Anita Lande, Cabrillo College
Junnae Landry, Pratt Community
College
Karen Lankisch, UC Clermont
David Lanzilla, Central Florida
Community College
Nora Laredo, Cerritos Community
College
Jennifer Larrabee, Chippewa Valley
Technical College
Debra Larson, Idaho State University
Barb Lave, Portland Community College
Audrey Lawrence, Tidewater
Community College
Deborah Layton, Eastern Oklahoma
State College
Larry LeBlanc, Owen Graduate School–Vanderbilt University
Philip Lee, Nashville State Community
College
Michael Lehrfeld, Brevard Community
College
Vasant Limaye, Southwest Collegiate Institute for the Deaf – Howard College
Anne C. Lewis, Edgecombe
Community College
Stephen Linkin, Houston Community
College
Peggy Linston, Athens Technical College
Hugh Lofton, Moultrie Technical
College
Donna Lohn, Lakeland Community
College
Jackie Lou, Lake Tahoe Community
College
Donna Love, Gaston College
Curt Lynch, Ozarks Technical
Community College
Sheilah Lynn, Florida Community College–Jacksonville

Pat R. Lyon, Tomball College


Bill Madden, Bergen Community
College
Heather Madden, Delaware Technical
& Community College
Donna Madsen, Kirkwood Community
College
Jane Maringer-Cantu, Gavilan College
Suzanne Marks, Bellevue Community
College
Carol Martin, Louisiana State University–Alexandria
Cheryl Martucci, Diablo Valley College
Roberta Marvel, Eastern Wyoming
College
Tom Mason, Brookdale Community
College
Mindy Mass, Santa Barbara City College
Dixie Massaro, Irvine Valley College
Rebekah May, Ashland Community
& Technical College
Emma Mays-Reynolds, Dyersburg
State Community College
Timothy Mayes, Metropolitan State
College of Denver
Reggie McCarthy, Central Lakes College
Matt McCaskill, Brevard Community
College
Kevin McFarlane, Front Range
Community College
Donna McGill, Yuba Community
College
Terri McKeever, Ozarks Technical
Community College
Patricia McMahon, South Suburban
College
Sally McMillin, Katharine Gibbs School–Philadelphia
Charles McNerney, Bergen Community
College
Lisa Mears, Palm Beach Community
College
Imran Mehmood, ITT Technical Institute–King of Prussia Campus
Virginia Melvin, Southwest Tennessee
Community College
Jeanne Mercer, Texas State Technical
College
Denise Merrell, Jefferson Community
& Technical College
Catherine Merrikin, Pearl River
Community College
Diane D. Mickey, Northern Virginia
Community College

Darrelyn Miller, Grays Harbor College


Sue Mitchell, Calhoun Community
College
Jacquie Moldenhauer, Front Range
Community College
Linda Motonaga, Los Angeles City
College
Sam Mryyan, Allen County
Community College
Cindy Murphy, Southeastern
Community College
Ryan Murphy, Sinclair Community
College
Sharon E. Nastav, Johnson County
Community College
Christine Naylor, Kent State University–Ashtabula
Haji Nazarian, Seattle Central
Community College
Nancy Noe, Linn-Benton Community
College
Jennie Noriega, San Joaquin Delta
College
Linda Nutter, Peninsula College
Thomas Omerza, Middle Bucks
Institute of Technology
Edith Orozco, St. Philip's College
Dona Orr, Boise State University
Joanne Osgood, Chaffey College
Janice Owens, Kishwaukee College
Tatyana Pashnyak, Bainbridge College
John Partacz, College of DuPage
Tim Paul, Montana State University–Great Falls
Joseph Perez, South Texas College
Mike Peterson, Chemeketa
Community College
Dr. Karen R. Petitto, West Virginia
Wesleyan College
Terry Pierce, Onandaga Community
College
Ashlee Pieris, Raritan Valley Community
College
Jamie Pinchot, Thiel College
Michelle Poertner, Northwestern
Michigan College
Betty Posta, University of Toledo
Deborah Powell, West Central Technical
College
Mark Pranger, Rogers State University
Carolyn Rainey, Southeast Missouri
State University
Linda Raskovich, Hibbing Community
College


Leslie Ratliff, Griffin Technical College


Mar-Sue Ratzke, Rio Hondo
Community College
Roxy Reissen, Southeastern Community
College
Silvio Reyes, Technical Career Institutes
Patricia Rishavy, Anoka Technical
College
Jean Robbins, Southeast Technical
Institute
Carol Roberts, Eastern Maine
Community College and University
of Maine
Teresa Roberts, Wilson Technical
Community College
Vicki Robertson, Southwest Tennessee
Community College
Betty Rogge, Ohio State Agricultural
Technical Institute
Lynne Rusley, Missouri Southern State
University
Claude Russo, Brevard Community
College
Ginger Sabine, Northwestern Technical
College
Steven Sachs, Los Angeles Valley College
Joanne Salas, Olympic College
Lloyd Sandmann, Pima Community College–Desert Vista Campus
Beverly Santillo, Georgia Perimeter
College
Theresa Savarese, San Diego City College
Sharolyn Sayers, Milwaukee Area
Technical College
Judith Scheeren, Westmoreland
County Community College
Adolph Scheiwe, Joliet Junior College
Marilyn Schmid, Asheville-Buncombe
Technical Community College
Janet Sebesy, Cuyahoga Community
College
Phyllis T. Shafer, Brookdale
Community College
Ralph Shafer, Truckee Meadows
Community College
Anne Marie Shanley, County College
of Morris
Shelia Shelton, Surry Community
College
Merilyn Shepherd, Danville Area
Community College
Susan Sinele, Aims Community College

Beth Sindt, Hawkeye Community College
Andrew Smith, Marian College
Brenda Smith, Southwest Tennessee
Community College
Lynne Smith, State University of New York–Delhi
Rob Smith, Katharine Gibbs School–Philadelphia
Tonya Smith, Arkansas State University–Mountain Home
Del Spencer, Trinity Valley Community College
Jeri Spinner, Idaho State University
Eric Stadnik, Santa Rosa Junior College
Karen Stanton, Los Medanos College
Meg Stoner, Santa Rosa Junior College
Beverly Stowers, Ivy Tech Community
College of Indiana
Marcia Stranix, Yuba College
Kim Styles, Tri-County Technical College
Sylvia Summers, Tacoma Community
College
Beverly Swann, Delaware Technical &
Community College
Ann Taff, Tulsa Community College
Mike Theiss, University of Wisconsin–Marathon Campus
Romy Thiele, Cañada College
Sharron Thompson, Portland
Community College
Ingrid Thompson-Sellers, Georgia
Perimeter College
Barbara Tietsort, University of Cincinnati–Raymond Walters College
Janine Tiffany, Reading Area
Community College
Denise Tillery, University of Nevada, Las Vegas
Susan Trebelhorn, Normandale
Community College
Noel Trout, Santiago Canyon College
Cheryl Turgeon, Asnuntuck
Community College
Steve Turner, Ventura College
Sylvia Unwin, Bellevue Community
College
Lilly Vigil, Colorado Mountain College
Sabrina Vincent, College of the
Mainland
Mary Vitrano, Palm Beach
Community College

Brad Vogt, Northeast Community College
Cozell Wagner, Southeastern
Community College
Carolyn Walker, Tri-County Technical
College
Sherry Walker, Tulsa Community College
Qi Wang, Tacoma Community College
Betty Wanielista, Valencia Community
College
Marge Warber, Lanier Technical College–Forsyth Campus
Marjorie Webster, Bergen Community
College
Linda Wenn, Central Community
College
Mark Westlund, Olympic College
Carolyn Whited, Roane State
Community College
Winona Whited, Richland College
Jerry Wilkerson, Scott Community
College
Joel Willenbring, Fullerton College
Barbara Williams, WITC Superior
Charlotte Williams, Jones County
Junior College
Bonnie Willy, Ivy Tech Community
College of Indiana
Diane Wilson, J. Sargeant Reynolds
Community College
James Wolfe, Metropolitan
Community College
Marjory Wooten, Lanier Technical
College
Mark Yanko, Hocking College
Alexis Yusov, Pace University
Naeem Zaman, San Joaquin Delta
College
Kathleen Zimmerman, Des Moines
Area Community College
We also thank Lutz Ziob, Merrick Van
Dongen, Jim LeValley, Bruce Curling,
Joe Wilson, Rob Linsky, Jim Clark,
Jim Palmeri, Scott Serna, Ben Watson,
and David Bramble at Microsoft for
their encouragement and support
in making the Microsoft Official
Academic Course programs the finest
instructional materials for mastering
the newest Microsoft technologies for
both students and instructors.


Brief Contents

Preface iv

1 Designing the Hardware and Software Infrastructure 1
2 Designing Physical Storage 30
3 Designing a Consolidation Strategy 58
4 Analyzing and Designing Security 87
5 Designing Windows Server-Level Security 107
6 Designing SQL Server Service-Level and Database-Level Security 129
7 Designing SQL Server Object-Level Security 150
8 Designing a Physical Database 168
9 Creating Database Conventions and Standards 194
10 Designing a SQL Server Solution for High Availability 209
11 Designing a Data Recovery Solution for a Database 242
12 Designing a Data-Archiving Solution 265

Glossary 285
Index 287


Contents

Lesson 1: Designing the Hardware and Software Infrastructure 1
Lesson Skill Matrix 1
Key Terms 1
Analyzing the Current Configuration 2

Thinking Holistically 3
Assessing the Current Configuration 3
Accommodating Changing Capacity Requirements 4

Designing for Capacity Requirements 6


Analyzing Storage Requirements 6
Forecasting and Planning Storage Requirements 7
Analyzing Network Requirements 9
Analyzing CPU Requirements 11
Analyzing Memory Requirements 12

Specifying Software Versions and Hardware Configurations 13
Following Best Practices 14
Choosing a Version and Edition of the Operating
System 14
Choosing an Edition of SQL Server 20
Choosing a CPU Type 22
Choosing Memory Options 23
Determining Storage Requirements 24
Planning for Hot Add CPUs and RAM 24

Skill Summary 25
Knowledge Assessment 26
Case Study 28

Lesson 2: Designing Physical Storage 30
Lesson Skill Matrix 30
Key Terms 30
Understanding SQL Server Storage
Concepts 31
Understanding Data Files and Transaction
Log Files 31
Understanding Pages 32
Understanding Extents 33

Estimating Database Size 33


Planning for Capacity 34
Data Compression 34
Sparse Columns 35
Understanding RAID 36
Designing Transaction Log Storage 37
Managing Transaction Log File Size 38
Designing Backup-File Storage 41
Managing Your Backups 41
Maintaining Transaction Log Backups 42
Backup Compression 42
Deciding Where to Install the Operating System 43
Deciding Where to Place SQL Server Service
Executables 43
Specifying the Number and Placement of Files for
Each Database 44
Setting Up Database Files 44
Setting Up Filenames 45
Setting Up File Size 45
Setting Up Database Filegroups 45
Designing Instances 45
Deciding on the Number of Instances 46
Deciding How to Name Instances 47
Deciding How Many Physical Servers Are
Needed 49
Deciding Where to Place System Databases for each
Instance 49

Deciding on the Tempdb Database Physical Storage 50


Establishing Service Requirements 52
Specifying Instance Configurations 52

Skill Summary 54
Knowledge Assessment 55
Case Study 55

Lesson 3: Designing a Consolidation Strategy 58
Lesson Skill Matrix 58
Key Terms 58
Phase 1: Envisioning 59

Forming a Team 59
Making the Decision to Consolidate 60
Developing Guidelines for the Consolidation
Project 65
Examining Your Environment 66

Phase 2: Planning 73
Evaluating Your Data 74
Making Initial Decisions about the Plan 75
Case Study: Consolidating and Clustering 77
Planning to Migrate Applications 78
Case Study: Avoiding Scope Creep 79

Phase 3: Developing 80
Acquiring Your Hardware 80
Creating the Proof of Concept 81
Creating the Pilot 81

Phase 4: Deploying 82
Skill Summary 83
Knowledge Assessment 83
Case Study 83

Lesson 4: Analyzing and Designing Security 87
Lesson Skill Matrix 87
Key Terms 87
Gathering Your Security Requirements 88
Case Study: Gathering Requirements 89
Understanding Security Scope 89
Analyzing Your Security Requirements 90
Dealing with Conflicting Requirements 91
Analyzing the Cost of Requirements 92
Integrating with the Enterprise 93
Choosing an Authentication Method 94
Setting Up Using Groups and Roles 94
Assessing the Impact of Network
Policies 96

Achieving High Availability in a Secure Way 97
Mitigating Server Attacks 99
Protecting Backups 101
Auditing Access 101
Making Security Recommendations 102
Performing Ongoing Reviews 102
Skill Summary 103
Knowledge Assessment 103
Case Study 103

Lesson 5: Designing Windows Server-Level Security 107

Lesson Skill Matrix 107


Key Terms 107
Understanding Password Rules 108
Enforcing the Password Policy 109
Enforcing Password Expiration 109
Enforcing a Password Change at the Next Login 110
Following Password Best Practices 110

Setting Up the Encryption Policy 110


Understanding the Encryption Hierarchy 111
Using Symmetric and Asymmetric Keys 111
Using Certificates 112
Considering Performance Issues 112
Developing an Encryption Policy 113
Managing Keys 114
Choosing Keys 114
Extensible Key Management 114
Introducing SQL Server Service Accounts 115
Understanding the SQL Server Services 115
Choosing a Service Account 117
Choosing a Domain User 118
Choosing a Local Service 118
Choosing a Network Service 119
Choosing a Local System 119
Case Study: Planning for Services 119
Changing Service Accounts 119
Setting Up Antivirus Software 122
Working with Services 123
Configuring Server Firewalls 124
Physically Securing Your Servers 125
Skill Summary 125
Knowledge Assessment 126
Case Study 126

Lesson 6: Designing SQL Server Service-Level and Database-Level Security 129
Lesson Skill Matrix 129
Key Terms 129
Creating Logins 130
Granting Server Roles 131
Mapping Database Users to Roles 132

Securing Schemas 133


Granting Database Roles 134


Working with Fixed Database Roles 134
Working with User-Defined Roles 135
Using Application Roles 136

Introducing DDL Triggers 137


Understanding DDL Trigger Scope 137
Specifying DDL Trigger Events 138
Defining a DDL Trigger Policy 139

Defining a Database-Level Encryption Policy 140


Transparent Data Encryption 140
Securing Endpoints 141
Introducing TDS Endpoints 142
Using SOAP/Web Service Endpoints 142
Working with Service Broker and Database Mirroring
Endpoints 143
Defining an Endpoint Policy 143

Granting SQL Server Agent Job Roles 144


Case Study: Specifying Proxies 144
Designing .NET Assembly Security 145
Setting SAFE 145
Setting EXTERNAL_ACCESS 146
Setting UNSAFE 146
Skill Summary 146
Knowledge Assessment 147
Case Study 147

Lesson 7: Designing SQL Server Object-Level Security 150
Lesson Skill Matrix 150
Key Terms 150
Developing a Permissions Strategy 151
Understanding Permissions 152
Applying Specific Permissions 153

Analyzing Existing Permissions 154


Specifying the Execution Context 155
Implementing EXECUTE AS for an Object 155
Case Study: Developing an EXECUTE AS Policy for
an Object 156
Implementing EXECUTE AS in Batches 157
Auditing 158
Developing an EXECUTE AS Policy for Batches 159

Specifying Column-Level Encryption 159


Choosing Keys 160
Deploying Encryption 160

Using CLR Security 161


Creating Assemblies 161
Accessing External Resources 162
Developing a CLR Policy 163

Skill Summary 164
Knowledge Assessment 165
Case Study 165

Lesson 8: Designing a Physical Database 168
Lesson Skill Matrix 168
Key Terms 169
Modifying a Database Design Based on Performance
and Business Requirements 170
Planning a Database 170
Ensuring That a Database Is Normalized 171
Allowing Selected Denormalization for Performance
Purposes 171
Ensuring That the Database Is Documented and
Diagrammed 172

Designing Tables 173


Deciding Whether Partitioning Is Appropriate 174
Specifying Primary and Foreign Keys 175
Choosing a Primary Key 176
Using Constraints 180
Deciding Whether to Persist Computed Columns 182
Specifying Physical Location of Tables, Including Filegroups
and a Partitioning Scheme 182

Designing Filegroups 182


Designing Filegroups for Performance 183
Designing Filegroups for Recoverability 184
Designing Filegroups for Partitioning 184

Designing Index Usage 184


Designing Indexes to Make Data Access Faster and to
Improve Data Modification 185
Creating Indexes with the Database Tuning Advisor 186
Specifying Physical Placement of Indexes 186

Designing Views 187


Analyzing Business Requirements 187
Choosing the Type of View 188
Specifying Row and Column Filtering 189

Skill Summary 189


Knowledge Assessment 190
Case Study 190

Lesson 9: Creating Database Conventions and Standards 194
Lesson Skill Matrix 194
Key Terms 194
Understanding the Benefits of Database Naming
Conventions 195
Establishing and Disseminating Naming
Conventions 196


Defining Database Standards 200

Transact-SQL Coding Standards 200


Defining Database Access Standards 201
Deployment Process Standards 203
Database Security Standards 205

Skill Summary 205


Knowledge Assessment 206

Lesson 10: Designing a SQL Server Solution for High Availability 209
Lesson Skill Matrix 209
Key Terms 210
Examining High-Availability Technologies 211
Identifying Single Points of Failure 211
Setting High-Availability System Goals 212
Recognizing High-Availability System
Limitations 213

Understanding Clustering 214


Understanding Clustering Requirements 215
Designing a Clustering Solution 216
Clustering Enhancements 218
Considering Geographic Design 219
Making Hardware Decisions 219
Addressing Licensing Costs 220

Understanding Database Mirroring 220


Designing Server Roles for Database
Mirroring 221
Understanding Protection Levels 222
Designing a Database-Mirroring Solution 223
Configuring a Database-Mirroring Solution 224
Testing Database Mirroring 224
Mirroring Enhancements 225

Understanding Log Shipping 226


Choosing Log-Shipping Roles 226
Switching Log-Shipping Roles 227
Reconnecting Client Applications 227

Understanding Replication 228


Implementing High Availability with Transactional
Replication 229
Case Study: Handling Conflicts 229
Implementing High Availability with Merge
Replication 230
Designing Highly Available Storage 230
Designing a High-Availability Solution 233

Developing a Migration Strategy 235


Testing Your Migration 236

Minimizing Downtime 236
Implementing Address Abstraction 237
Training Your Staff 237

Skill Summary 237
Knowledge Assessment 238
Case Study 238

Lesson 11: Designing a Data Recovery Solution for a Database 242
Lesson Skill Matrix 242
Key Terms 242
Backing Up Data 243
Restoring Databases 246
Devising a Backup Strategy 249
Designing a Backup and Restore Strategy:
The Process 251
Choosing a Recovery Model 253

Developing Database Mitigation Plans 257


Skill Summary 261
Knowledge Assessment 262
Case Study 262

Lesson 12: Designing a Data-Archiving Solution 265

Lesson Skill Matrix 265


Key Terms 265
Deciding to Archive Data? 266
Determining Business and Regulatory Requirements 267
Case Study: Presenting a Data-Archiving Scenario 267
Determining What Data Will Be Archived 269
Developing a Data-Movement Strategy 273

Designing a Replication Topology 274
Introducing Transactional Replication 276

Selecting a Storage Media Type and Format


Skill Summary 281
Knowledge Assessment 281
Case Study 281
Glossary 285
Index 287


LESSON 1: Designing the Hardware and Software Infrastructure

LESSON SKILL MATRIX
TECHNOLOGY SKILL                                                                 70-443 EXAM OBJECTIVE

Design for capacity requirements.                                                Foundational
Analyze storage requirements.                                                    Foundational
Analyze network requirements.                                                    Foundational
Analyze CPU requirements.                                                        Foundational
Analyze the current configuration.                                               Foundational
Analyze memory requirements.                                                     Foundational
Forecast and incorporate anticipated growth requirements into the
  capacity requirements.                                                         Foundational
Specify software versions and hardware configurations.                           Foundational
Choose a version and edition of the operating system.                            Foundational
Choose a version of SQL Server.                                                  Foundational
Choose a CPU type.                                                               Foundational
Choose memory options.                                                           Foundational
Choose a type of storage.                                                        Foundational

KEY TERMS
budgetary constraint: Limits
placed on your ability to invest
as much as you might wish in an
infrastructure improvement project.
capacity: A measure of the
ability to store, manipulate, and
report information collected for
the enterprise. Excess capacity
suggests a declining business
need or too much investment
in infrastructure.

horizon: A forecasting target. A horizon too far distant may result in capacity or other changes that don't prove needed; a horizon too near may result in investments that don't meet tomorrow's needs.
policies: A set of written guidelines
providing direction on how to
process any number of issues
(e.g., a corporate password policy).

regulatory requirements: A set of compliance directions from an external organization. This could be a governmental agency (e.g., the regulator of the Sarbanes-Oxley Act or HIPAA) or your corporate headquarters.
security measures: The steps
taken to assure data integrity.


There is an old saying among carpenters and woodworkers when preparing to work with
a good or particularly special piece of hardwood: "Measure it twice, and cut it once." In
other words, make sure of what you're doing and then get it right the first time, because it
might be the only chance you have.

Added to that old saying are others: "No one plans to fail, they fail to plan"; "Failure to plan
on your part does not constitute an emergency on mine"; and "A house built on sand will fall
down." The point is to emphasize the role that careful planning and design of the underlying
support structure, the infrastructure, plays in the successful completion of any project, from a
child's dollhouse to a family vacation to a career as a database administrator.
If you were going to build a house, either for yourself or someone else, the first thing you'd
want to know is how it will be used and how big it needs to be. To find out, you'd ask yourself (or a client) some key questions: How much land is available on which to build? How
many people will live there? Does the couple have plans for additional children? Will it be
only a house, or will it serve as a home office? Is a separate section with a separate entrance
required? How much money and resources are available? With that information, you'd then
design and build accordingly. When it comes to a database server infrastructure, you need to
do the same.
To reemphasize: It's very important that you grasp the underlying premise of this lesson and the
rest of the book. If you understand how to plan and design a database infrastructure and how to
successfully implement those plans, you will reap enormous benefits in terms of time saved and
resources properly allocated while increasing the probability that your activities will succeed.
The process of designing infrastructure often depends more on your understanding of the
underlying premises than on a single set of rules. Every infrastructure you design or work on
will need to meet unique requirements; there is no "one size fits all." Unless you understand
the hows and whys of your process, the end result will be far from satisfactory.
In this lesson, you'll take the first steps toward designing a database server infrastructure. Like
anything you build, whether a birdhouse or the Great Pyramid, the foundation is the key. First
you must review strategies for assessing your current configuration and gathering data about the
current capacity of key resources such as storage, CPU, memory, and network bandwidth. You
will then learn how to use this data, along with the business requirements of the organization, to estimate future capacity needs. The second part of the lesson will look at how to specify
software versions and hardware configurations that meet the organization's requirements.

Analyzing the Current Configuration


THE BOTTOM LINE

You must first understand, completely, that which exists: the "as-is." The second step
involves understanding what is desired: the "to be." And finally, you must understand the
plans for bridging the gap: the implementation strategy.
As you almost certainly know, you'll rarely, if ever, be involved in designing a completely
new database server infrastructure. You'll nearly always be working with an organization that
has an existing infrastructure that needs to change to meet enterprise growth and to enhance
performance.
In that case, the first step is not to reach for a piece of paper to draw your dream infrastructure.
Instead, the first step is to evaluate the various subsystems of the existing infrastructure and
figure out what you have to work with. This initial evaluation process will aid you in assessing
how well the different subsystems interact and will also highlight potential trouble spots.
Next, you should gather the requirements you need to have in place for the modified
infrastructure. These requirements may be technical or business-related (they're usually both),


and they need to be prioritized in that context. Once you've established the requirements, set the
priorities, and determined the funding levels, you can design modifications to the infrastructure.

TAKE NOTE

When designing modifications, a good practice is to standardize the hardware and software
configuration of database servers as much as possible. Doing so simplifies the design of the
infrastructure and reduces the maintenance overhead. In addition, standardization results
in significant cost savings.

Thinking Holistically
Whether before or after you've analyzed the capacity needs of the enterprise's individual
database servers, you must at some point (the earlier the better) evaluate the existing
database server infrastructure as a whole. This view can give you a quick assessment of the
overall health of the infrastructure and help you determine any recurring trouble spots.

You should also think in terms of the ideal. Are the databases optimally designed? Are
disk-storage systems being used effectively? Are CPU and memory types and allocations
appropriate? Is the network properly designed and prepared for the new infrastructure?
You should use your evaluations to determine what modifications should be made to the
infrastructure to support business growth. And you should be able to make the business case
for your recommendations.

Assessing the Current Configuration


You should take a number of steps at the outset to assess the condition of the current
database server configurations:

Download: You can download the latest SQL Server service packs and hotfixes at the SQL Server Web site at http://www.microsoft.com/sqlserver.

REF: For more information about the service accounts for SQL Server services, refer to the topic "Setting Up Windows Service Accounts" in SQL Server 2005 Books Online.

• Check that licenses conform to your actual implementation.
• Inventory the operating system versions, service packs, and hotfixes running on each database server.
• Verify whether the necessary OS service packs and hotfixes have been applied.
• Identify any compatibility issues between the operating environment and the applications running on the database server.
• Inventory the SQL Server versions, service packs, and hotfixes (a query sketch follows this list).
• Verify whether the latest service packs or hotfixes have been applied.
• Inventory what SQL Server services are running on each database server and what service accounts have been assigned to each. To do so, you can use SQL Server 2005 Configuration Manager. A short list of important services includes:
  • SQL Server Database Engine
  • SQL Server Agent
  • SQL Server Full-Text Search
  • SQL Server Reporting Services (SSRS)
  • SQL Server Analysis Services (SSAS)
  • SQL Server Browser
  • SQL Server Integration Services (SSIS)
• Inventory hardware configurations, including disk subsystems, CPUs, memory, network cards, and power supplies on database servers. Make note of RAID and/or SCSI use.
• Identify all servers in a cluster, if a clustering environment is being used.
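For the SQL Server version, service pack, and edition inventory, one low-effort option is the SERVERPROPERTY function, which reports this information for the instance you are connected to. The following is a minimal sketch; run it against each instance and record the results.

-- Minimal version/edition inventory for the connected instance.
SELECT
    SERVERPROPERTY('MachineName')    AS MachineName,
    SERVERPROPERTY('InstanceName')   AS InstanceName,     -- NULL for a default instance
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('ProductLevel')   AS ServicePackLevel, -- for example, SP2
    SERVERPROPERTY('Edition')        AS Edition;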


ANOTHER WAY
Use the stored procedure sp_configure with 'show advanced options' to display the current settings. SQL Server Configuration Manager can help you collect network configuration data, such as the libraries, protocols, and ports for each instance.
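As a rough illustration of that tip, the following sketch turns on the advanced options and then lists every configuration setting with its configured and running values; it assumes you have permission to run sp_configure on the instance.

-- List all current configuration settings, including advanced ones.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure;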

REF
Lesson 2 discusses disk subsystems and physical storage design considerations in more detail.

• Record SQL Server configuration settings. Record the minimum and maximum memory settings, the CPUs used by the instance, and the default connection properties for each SQL Server instance.
• Review the configuration management process for proposing, approving, and implementing configuration changes, and identify opportunities to make the process more efficient. What tools are used?
• Assess the quality of the database server documentation.
• Verify the capabilities of disk subsystems and physical storage. Determine whether the RAID levels of existing disk subsystems support data availability and performance requirements.
• Determine the locations of transaction log files and data files.
• Examine the use of filegroups.
• Are adequate data-file sizes allocated to databases, including the tempdb database?
• Verify that the AutoShrink property is set to False, to ensure that the OS files maintaining table and index data are resized accordingly (see the sketch after this list).
• Determine whether disk-maintenance operations, such as defragmentation, are performed periodically.
• Assess Event Viewer errors to identify disk storage-related problems.
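One lightweight way to gather several of these items at once (file locations, file sizes, and the AutoShrink setting) is a query against the catalog views; the following is a minimal sketch, not a complete audit.

-- File locations and sizes for every database, plus the AutoShrink setting.
SELECT
    d.name              AS DatabaseName,
    d.is_auto_shrink_on AS AutoShrinkOn,   -- should normally be 0 (False)
    mf.type_desc        AS FileType,       -- ROWS (data) or LOG
    mf.physical_name    AS FileLocation,
    mf.size * 8 / 1024  AS SizeMB          -- size is reported in 8-KB pages
FROM sys.databases AS d
JOIN sys.master_files AS mf
    ON mf.database_id = d.database_id
ORDER BY d.name, mf.type_desc;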

Accommodating Changing Capacity Requirements


Requirements analysis is key to the process of designing modifications to a database server
infrastructure. Just as you need to know the purpose of a house in order to build one that
meets your needs, you must properly identify the business requirements in order to design
your infrastructure. Otherwise, your design won't meet the needs of the organization; and
not only can you forget professional pride, you'll be lucky if you still have a job.
It's essential that you always work in a collaborative way with company management, IT staff,
and end users to identify both the technical and business requirements that the new database
infrastructure is expected to support.
There is an intricate dance between the technical aspects and the nontechnical aspects of a
project, and weaving them together seamlessly is one of your most important, if never really
specified, jobs. Technical aspects and requirements typically focus on tasks such as capacity,
archiving, distribution, and other needs. These are affected by business requirements that
include budgetary and legal restrictions, company IT policies, and so on. Successful comprehension of both sets of requirements allows you to know precisely the scope of modifications
to the infrastructure and establishes a valuable foundation on which to base your design and
modification decisions.
When designing modifications to a database server infrastructure, you must consider its
future capacity needs based on the projected business growth of the organization. In addition,
you must consider requirements pertaining to data archiving, database server consolidation,
and data distribution.

CONSIDERING TECHNICAL REQUIREMENTS


The rest of this lesson introduces specific capacity needs, usually when talking about a specific server. It's crucial to a successful design that you analyze and identify the capacity requirements of the database server infrastructure as a whole.
Because it's difficult to extrapolate the capacity needs of the entire infrastructure, you may not always be able to project growth except in qualitative and general terms. You should, nonetheless, answer these questions for your future planning estimates and projections:


REF
Lesson 12 covers data archiving in detail.

REF
Lesson 3 covers database consolidation.

• Is the enterprise planning, or likely, to experience growth through increases in business operations, customer base, or increased demand for databases?
• Are there plans to utilize applications that require additional databases?
• Are there plans to improve the database server hardware?
• What cost variations, such as a decrease in the cost of servers and storage devices, will you see because of market forces?
• What data archiving requirements exist? Will these change? Do they differ by department?
• What regulatory requirements are in place or are being contemplated? What security components do they involve?
• Are any database servers potential candidates for consolidation?
• Are there opportunities to optimize the data-distribution process through simplification and/or standardization?

CONSIDERING BUSINESS REQUIREMENTS


The nontechnical, business aspects of an organization or enterprise play a major role in determining the shape and scope of any infrastructure system you design. When considering business requirements, you should be aware of any and all budgetary constraints, IT policies,
and industry-specific regulations. Additionally, you should consider the organization's data
security measures and availability needs and requirements:

REF
Lesson 4 covers database security. Lessons 5 through 7 cover other security-related issues.

• Budgetary constraints. The amount of money an organization is willing to spend will obviously affect the sort of database server infrastructure you can design. You can budget funds for a project in a number of ways, but one of your key roles in the process is to design within the constraints imposed by the bottom line. An ancillary role you play is to make business cases when the budget is unrealistic or when spending money now may produce a better return on investment (ROI) later.
• Existing IT policies. Any modifications to a database infrastructure must comply with the organization's existing IT policies. Normally, these policies cover the following:
  • Remote access procedures and rules
  • Security policies, including encryption
  • Service-level agreements (SLAs)
  • Standard hardware and software configurations
• Regulatory requirements. With the collection of data come twin demands for greater privacy and security; at the same time, there are demands for data retention. All these demands, as you'll see throughout this lesson and the rest of the text, translate into infrastructure-related requirements. For example, the health care and banking industries have strict privacy requirements that translate into requirements for data security and auditing and for maintaining certain data for specific time periods. These requirements primarily affect disk-space storage and archiving needs and design considerations.
• Data security. An organization's data security requirements include the following:
  • Confidentiality agreements with customers
  • Privacy restrictions
  • Data encryption needs
  • External regulations
• Data availability. Typically, the overall infrastructure's needs are a reflection of the data-availability requirements applied to individual database servers and then generalized for the entire infrastructure.
  One other availability issue to consider applies only if the planned modification of the database infrastructure results in a planned or unplanned loss of data availability. Regardless of whether you anticipate a potential loss of availability across the infrastructure, it's important that you have solutions in place to prevent or minimize the loss.
This may include placing mission-critical data on a redundant site that can be used as an
emergency backup during infrastructure changes.
Now that you've examined some infrastructure-wide and general considerations for assessing,
planning, and modifying a database server infrastructure, you need to look at the process in
more detail, especially as it relates to specific capacity requirements.

Designing for Capacity Requirements


THE BOTTOM LINE

Your design must meet current and projected data storage requirements. You also must
decide on a horizon when anticipating changes. You can anticipate near-term changes far
better than long-term requirements. Make everyone aware of your forecasting periods.

There are two sources of capacity requirements: the business and technical requirements of
the organization. The technical requirements are dictated by need and availability. You should
also determine the business goals of the organization for which youre developing the database
infrastructure. Without knowing those, you can't analyze or forecast its capacity needs, any
more than you can build the best possible house without knowing what it will be used for.
With those two points in mind, you have two other key tasks to perform: assessing the current capacity of system resources; and identifying any information, such as growth trends,
that you can use to forecast future needs. Most of the time, you can correlate the trends with
a variable that can be measured, such as the database transaction growth rate (the rate at
which the read/write activity on the database server grows).
In the following sections, you'll learn how to gather data about the current capacity of key
system resources such as storage, CPU, memory, and network bandwidth. Then, you'll learn
how to use the data to estimate future capacity needs, using that information to design
(or redesign, when one already exists) a database infrastructure.

Analyzing Storage Requirements


A lot of considerations go into analyzing the storage requirements of a database server. In
addition to the physical size of the database, you need to consider the transaction growth rate
and data-distribution requirements. Some industries, particularly financial and healthcare
institutions, are subject to requirements regarding data retention, storage, and security
that must be taken into account in determining storage capacity. You'll now learn how to
determine the current storage capacity of a database server and identify factors that affect
its growth. We'll also look at how to forecast future disk-storage requirements, taking into
account any relevant regulatory requirements that may apply to your business or enterprise.

ASSESSING CURRENT STORAGE CAPACITY


Typically, you won't be starting from scratch when designing a database infrastructure; you'll
be reviewing and upgrading an existing system. Even if the recommended upgrade calls for a
complete overhaul of the system, you need to be fully aware of the database servers current
storage capacity. In making your survey and determination of current storage capacity,
consider the following factors:
Disk-space capacity. Establishing disk-space capacity requires several steps. First,
determine how much disk space is used by the database data files. Then, add the
space required for the databases transaction log files, the portion of tempdb that
supports database activity, and the space being used by full-text indexes. Look for any
maintenance operations that may require extra disk space for the database files, such as
index reorganization.


LAB EXERCISE
Perform the exercise in your lab manual.

If you're examining an existing system, make sure you base your measurement of the
current disk usage on a properly sized database and that adequate disk space is already
allocated for data and log files. If adequate disk space is allocated for these files, SQL
Server doesn't need to dynamically grow the database and request extra disk space from
the operating system. The process of allocating extra disk space for a file uses significant
disk resources. In addition, the process can cause physical file fragmentation because disk
segments are added to an existing file.
Disk throughput capacity. Next, assess the disk I/O rate that the database requires. You
can use System Monitor's PhysicalDisk:Disk Read Bytes/sec and PhysicalDisk:Disk
Write Bytes/sec counters to measure disk I/O. If the database receives mostly reads, also
check the Avg. Disk Read Queue Length counter value, which should be low for optimal performance. If the database primarily receives writes, check the Avg. Disk Write
Queue Length counter value, which should also be low.
Locations and roles of database servers. When you're working with a distributed environment, you should establish where the database servers are (and should be) and their
different roles, because that may require a different disk-capacity assessment for each site
and each server. For example, the servers at an organization's branch offices may store
only a subset of the data that is stored on the main server at headquarters. Based on the
roles of the servers, you may be able to identify which databases are most likely to experience growth in disk-space usage or have particularly high or low disk-space requirements.
In Exercise 1.1, you'll learn how to use System Monitor to assess current disk throughput.
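If you also want to sample disk activity from inside SQL Server rather than from System Monitor, the sys.dm_io_virtual_file_stats function offers a complementary view. The sketch below reports cumulative read and write volume per database file; the figures accumulate from instance startup, so take two samples a few minutes apart and compare them to derive a rate.

-- Cumulative I/O per database file since the instance last started.
SELECT
    DB_NAME(vfs.database_id)            AS DatabaseName,
    mf.physical_name                    AS FileName,
    vfs.num_of_bytes_read / 1048576     AS MBRead,
    vfs.num_of_bytes_written / 1048576  AS MBWritten,
    vfs.io_stall_read_ms,               -- total milliseconds spent waiting on reads
    vfs.io_stall_write_ms               -- total milliseconds spent waiting on writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_bytes_read + vfs.num_of_bytes_written DESC;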

Forecasting and Planning Storage Requirements


When you're building a house, you want to design it with an eye on future needs and
plans. The same applies to storage requirements. First, you need an idea of your future
needs. Armed with this information, you can begin planning storage requirements. You
need to consider several key factors in planning for the future.

ESTABLISHING THE ESTIMATION PERIOD


Establish at the outset your planning period (in other words, the horizon or the length of
time for which the planning is valid). Are you establishing a plan that should be valid for one
year, two years, five years, or more? Review your assumptions regarding database needs for
the period. Are they valid? Do your estimates of the enterprise's future, such as its anticipated
growth, match those of the enterprise? You will need to work with management and other
key stakeholders in the enterprise to determine how long the estimation period is and what
form it will take. Often, they will need to reconcile conflicting ideas to come to a consensus
on what the period should be. Don't be surprised if they turn to you for expert advice and
mediation of internal disputes.
PROJECTING THE GROWTH RATE OF REQUIRED DISK SPACE
Obviously, you must estimate the amount of future disk space required. There are two ways
to do this effectively: You can either base your projection on an existing source of data that
correlates well with growth or you can follow a rule-of-thumb estimate when you don't have a
correlating variable.
Ideally, you should correlate the growth rate with a measurable variable such as increases in
the transaction rate or user load. If you can identify such a variable, you can effectively estimate future growth in disk space. For example, a clear correlation may already exist between
the growth in disk space for key tables and the number of new orders in a day. If so, and if
other variables are insignificant, then you can use the rate of growth in orders to estimate the
rate of growth in disk space required. If you don't have a correlating variable, you can use past
growth trends to estimate future trends. In some cases, historical trends may be the only data
you have for estimation.


You can make an estimate of future trends using any of a number of formulas, where:
F = Future disk space
C = Current disk space
T = Growth-rate time unit (the number of periods)
A = Growth amount per period
R = Rate of growth

Linear growth: If you expect disk space to grow by a constant amount in a specific period,
the growth is linear. In that case, you can apply the following formula:
F = C + (A × T)
For example, if you have an 800 GB database that's expected to grow 100 GB per year, in
four years the database is expected to be 1200 GB: 800 GB + (100 GB × 4) = 1200 GB.
Compound growth: If you expect disk space to grow at a constant rate during a specific
period (for example, at a certain percentage per month or per quarter), that growth is
described as compound. In that instance, use the following formula:
F = C × (1 + R)^T
For example, if an 800 GB database is expected to grow by 3 percent per quarter for two
years, the projected database size after eight quarters (two years) will be approximately
1013 GB: 800 × (1 + 0.03)^8 ≈ 1013 GB.

TAKE NOTE

In this type of formula, you should express the growth rate as a decimal translation of the
percentage value. For example, if the growth rate is 3 percent per quarter, use the value
0.03 in the formula. In this example, the number of periods is specified in quarters so that
it's consistent with the growth-rate unit.
Geometric growth: If the disk space is expected to grow periodically by some increment,
but the increment itself is also growing at a constant rate, the disk space requirement grows
geometrically. In this case, use the sum of a series formula (also called a geometric series) to
determine the projected size:
F = C + (Initial Increment × (1 - (1 + Increment Growth Rate)^(T + 1))) / (1 - (1 + Increment Growth Rate))

LAB EXERCISE
Perform the exercise in your lab manual.

For example, if a 1000 GB database grows by an increment that starts at 3 GB per month
and increases at 2 percent per month, then in 24 months the total disk space required will
grow to approximately 1096 GB: 1000 + (3 × (1 - 1.02^(24 + 1))) / (1 - 1.02) ≈ 1096 GB.
In Exercise 1.2, you'll try your hand at forecasting future disk-storage requirements.
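To make the arithmetic concrete, the following sketch evaluates the three projections above directly in T-SQL; the figures and column names are illustrative only.

-- Worked projections for the examples above (all figures in GB).
SELECT
    800 + (100 * 4)                                       AS LinearAfter4Years,      -- 1200
    800 * POWER(1.03E0, 8)                                AS CompoundAfter8Quarters, -- about 1013
    1000 + 3 * (1 - POWER(1.02E0, 24 + 1)) / (1 - 1.02E0) AS GeometricAfter24Months; -- about 1096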

UNDERSTANDING THE IMPACT OF REGULATORY REQUIREMENTS


Legal considerations and/or governmental and financial regulations, such as those for banks
and the health care industry, can affect how long you need to retain data and how secure it
must be. Both of these factors in turn affect infrastructure design not only in terms of security
but also in terms of storage requirements:
Longevity. Regulations may specify the length of time for which data must be
maintained. Something as simple as immunization records, for example, needs to be
retained for 20 years. Banks may also be required to store certain types of customer data
for a specific number of years. Before estimating disk-space capacity, determine what
data must be available online. For any data you consider storing offline, assess how
quickly the data must be available for online access. Also consider the type of offline
media that will be used. Technology changes quickly and being able to read 20-year-old
media could be an issue. Tapes deteriorate with time, and even if backup tapes are in
perfect condition, a functional tape drive of the correct type would be needed in order
to read these tapes.
Privacy/Security. Regulations, industry guidelines, or legislation may mandate that
security measures, including encryption, be undertaken to protect consumer data. For
example, health insurers may be required to ensure the privacy of data. Such regulations
affect the data distribution strategy and, consequently, the disk-space capacity of local
and remote servers.
Privacy-related regulations may require data to be stored in an encrypted format. In SQL
Server, you can store data in an encrypted format by using several encryption algorithms.
However, encrypted data requires more disk space than nonencrypted data. In addition,
encryption increases CPU usage.
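To see that overhead concretely, a small sketch using the built-in EncryptByPassPhrase function follows (one of several encryption options; the passphrase and sample value are purely illustrative).

-- Compare the size of a value before and after encryption.
DECLARE @plain VARCHAR(100);
SET @plain = 'Sensitive customer data';

SELECT
    DATALENGTH(@plain)                                           AS PlainBytes,
    DATALENGTH(EncryptByPassPhrase('illustrative key', @plain))  AS EncryptedBytes;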

Analyzing Network Requirements


All database administrators and infrastructure designers should have a nuts-and-bolts
understanding of the topology and capacity of the network supporting the database
servers because this impacts infrastructural decisions. Available bandwidth, for example,
plays a large part in determining the backup strategies you use and the types of database
services you implement. In the following sections, youll learn how to identify the database components of the network topology. Youll also look at factors you should consider
when analyzing the current network traffic. Finally, youll learn how to estimate future
network-bandwidth requirements.

IDENTIFYING THE PARTS OF THE NETWORK THAT MAY AFFECT DATABASE TRAFFIC

Obtain or create a network diagram to identify the parts of the network that:
• Deliver replicated data to other servers
• Back up files to network devices
• Provide data to client applications

Identify and assess the location of the following items to determine weak points and potential
bottlenecks in the topology, such as low-bandwidth wide-area network (WAN) links. Also be
aware of the security aspects of the network and the impact they have on traffic:
• Local and remote connections between database servers
• Firewalls
• Antivirus applications

Assess capabilities of the database servers on the network by gathering the following
information about each:

LAB EXERCISE
Perform the exercise in your lab manual.

• Number of SQL Server instances
• Instance names
• Installed SQL services
• Network protocols

In Exercise 1.3, you'll use SQL Server Configuration Manager to gather information about
database servers.
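If you also want a quick server-side view of the protocols clients are actually using, sys.dm_exec_connections can complement Configuration Manager; a minimal sketch follows.

-- Count current connections by network transport and client address.
SELECT
    net_transport,          -- for example: TCP, Named pipe, Shared memory
    client_net_address,
    COUNT(*) AS Connections
FROM sys.dm_exec_connections
GROUP BY net_transport, client_net_address
ORDER BY Connections DESC;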

ANALYZING CURRENT DATABASE NETWORK TRAFFIC


You should analyze current database network traffic to estimate whether, and how long, the
existing network can support your database server infrastructure. If the network can't effectively handle current or future increases in traffic as a result of business growth, the performance of the database servers on the network will be adversely affected by traffic bottlenecks.


Analyze the traffic between servers and between clients and servers. Then, use the data you
gather to identify potential bottlenecks. Key areas to review include:

TAKE NOTE
To use Network Monitor for monitoring database traffic across servers, use the Network Monitor version included in Systems Management Server (SMS).

Traffic between servers. Use System Monitor counters to analyze the traffic caused by
backup processes, database mirroring, and replication:
Backup processes. The SQLServer: Backup Device:Device Throughput Bytes/sec
counter specifies the number of bytes per second that the backup device currently
supports. You should also review the backup strategy. If the amount of data is large and
available network bandwidth is low, frequent backups to network devices can saturate
the network.
Database mirroring. The SQLServer:Database Mirroring:Bytes Sent/sec and Bytes
Received/sec counters indicate the number of bytes transferred from the principal
server to the mirror server.
Replication. No single set of counters in System Monitor helps you analyze all
replication traffic; hence, what you need to use will depend on the type of replication
being used. In the case of subscribers, for example, you can monitor commands
delivered per second or transactions received per second.
Traffic between clients and servers. Among other things, you must determine the
client traffic on the network, assess how well the current network supports the user
load, and identify the additional traffic that will be caused by an increase in user load
or changes in the application. A useful technique is to use the System Monitor counter
Network Interface: Bytes Total/sec to establish the number of bytes transferred
across the database server's network interface. You need to do this for each network
interface on the server. Check whether a correlation exists between the Network Interface:
Bytes Total/sec counter and the SQLServer:General Statistics:User Connections counter.
By doing this, you can determine the network traffic caused by users.
Potential bottlenecks. Running Network Monitor on the database server lets you determine the number of bytes used, the percentage of the total bandwidth used, and the
number of bytes transmitted in a specific period. You can also filter specific patterns and
protocols for a more granular approach. Analyze this data to identify bottlenecks, and
work with the network administrator on strategies to eliminate the bottlenecks.

FORECASTING AND PLANNING NETWORK REQUIREMENTS


Now that you have begun to grasp the present network situation, you should take time to
forecast network traffic growth for database servers. As with all forecasting, there are no hard-and-fast rules (nor guarantees of being 100 percent accurate; just ask a weatherman), but
you should do the following:
Make growth estimates for each network type. Because different network types
support varying data flows and volumes, it's essential that you assess each network type
individually when determining growth estimates. A good tool is the network diagram
mentioned earlier in this section. Use it to help determine expected network traffic that
each segment may need to support.
Establish a baseline, and study the trends. Before you can even think about estimating
future network traffic growth, you need to establish a baseline of network usage. As your
network grows, you should use that baseline, and others collected at intervals, to determine changes in usage from the baseline(s) and identify growth trends.
Understand specific business needs and the expected workload for the estimation
period. Although understanding the technical aspects is a key to understanding design
of the network, it's important to keep in mind the activity trends and growth projections
for the enterprise. Review these business plans, and then estimate future usage and determine the network configuration that is required to support the plans. You should also
make sure you confirm the plans and gather statistics periodically so that you can detect
new trends early and adjust your estimates.


Analyzing CPU Requirements


The CPU is the heart of a computer and the heart of your database server infrastructure.
You'll now learn about what you need to take into account when analyzing the CPU
performance of a database server and when choosing a processor type. We'll also look at
what you should do to make meaningful estimates of future CPU requirements.

You'll review considerations for choosing a CPU, such as the performance-versus-cost benefit
of using processors with 32-bit and 64-bit architectures, and of using processors with multicore and hyperthreading technologies, later in this lesson.

ASSESSING CURRENT CPU PERFORMANCE

When you're analyzing the current CPU performance of a database server, consider the
following factors:

TAKE NOTE
Dynamic affinity is tightly constrained by per-CPU licensing. When you set the affinity mask, SQL Server verifies that the settings don't violate the licensing policy.

Type of CPUs. Identify the database servers in the system. For each, make a list of
its current CPU, speed, architecture (32-bit or 64-bit), and whether the processor is
multicore or capable of hyperthreading.
Affinity mask settings. By default, each thread allocated by a SQL Server instance is
scheduled to use the next available CPU. However, you can set the affinity mask to restrict
an instance to a specific subset of CPUs. Additionally, setting the affinity mask ensures that
each thread always uses the same CPU between interrupts. This reduces the swapping of a
thread among multiple CPUs and increases the cache-hit ratio on the second-level cache.
Current CPU usage. To identify any CPU performance issues, you should set a baseline of CPU usage in the current environment. To do so, first collect basic operations
data, such as the number of user connections and the amount of application data. Next,
establish the current CPU usage using monitoring tools such as System Monitor. Finally,
correlate the operations data with the CPU usage.
Hardware bottlenecks, recompilation of stored procedures, and the use of cursors
are some of the main causes of a decrease in CPU performance. To identify CPU
performance problems, use the counters that are included in System Monitor's
SQLServer:Plan Cache and SQLServer:SQL Statistics objects.
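A quick server-side complement to those counters is to check how many logical CPUs SQL Server sees and which cached queries consume the most CPU time; the sketch below is a starting point only, and the TOP (10) cutoff is arbitrary.

-- Logical CPUs visible to SQL Server and the hyperthreading ratio.
SELECT cpu_count, hyperthread_ratio
FROM sys.dm_os_sys_info;

-- The ten cached query plans that have consumed the most CPU time.
SELECT TOP (10)
    total_worker_time / 1000                    AS TotalCpuMs,
    execution_count,
    total_worker_time / execution_count / 1000  AS AvgCpuMsPerExecution
FROM sys.dm_exec_query_stats
ORDER BY total_worker_time DESC;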

FORECASTING AND PLANNING CPU REQUIREMENTS


Once you have a good understanding of the current CPU situation in your environment, you
should next assess and forecast future CPU needs in order to ensure the efficient and effective
operation of your database server infrastructure.
There are always a variety of indicators and factors to review, but the following are critical:
Determine the estimation period. By now you may be tired of looking at this
consideration, but you must keep this in mind in order to limit the scope of your
forecast activities to a realistic level.
Establish a baseline of CPU usage by using historical data. When analyzing data on
CPU usage, consider the type of work the SQL Server instance performs.
Identify factors that affect CPU usage. These factors obviously include the number
of users and the amount of application data on the servers. You should also observe
variations over time and try to find a correlation with measurable variables.
Confirm your estimates by performing load tests and by using sizing tools. Keep in
mind that adding CPUs to a server doesn't necessarily increase the overall CPU power
in a linear proportion.

TAKE NOTE

Some operating system licenses restrict the number of CPUs that may be used. Planning to
use more CPUs must take any such limitations into account.


Analyzing Memory Requirements


If the CPU is the heart of the computer, then memory is a combination of muscle,
sinew, and brainpower. As a general rule, when assessing the memory requirements of a
database server, you need to determine the amount of memory being used by the OS,
other processes on the server, and SQL Server. It's also important to bear in mind that
memory usage is affected by the type of CPU. In the following sections, you'll learn about
determining the current memory usage of a database server, the interaction between
memory usage and CPU type, and how to estimate future memory requirements.

ASSESSING CURRENT MEMORY USAGE


Assessing current memory usage isn't that difficult thanks to a number of tools, the most
important of which is System Monitor. To determine current memory usage and assess the
ability to satisfactorily meet the needs of the current environment, do the following:
Establish how much physical memory is installed on the database server.
In addition to the OS and SQL Server, determine what other processes will be making
use of the available memory.
Use these System Monitor counters to determine how much memory is available and
used on a server:
Memory:Available Bytes indicates how many bytes of memory are available.
Memory:Pages/sec specifies how many pages must be read from the disk or written
to the disk to resolve page faults.
SQLServer:Memory Manager:Total Server Memory determines the amount of
physical memory used by each instance of SQL Server.
Process:Working Set describes the set of memory pages that have been recently
accessed by the threads running in the process and can be used to determine how
much memory SQL Server is using.
SQLServer:Buffer Manager:Buffer cache hit ratio specifies the buffer cache-hit ratio. This counter identifies the percentage of pages that were found in the buffer pool without reading the
disk. The value of this counter should be over 90 percent. High values indicate good
cache usage and minimal disk access when searching for data.
SQLServer:Buffer Manager:Page Life Expectancy specifies the average time spent
by a data page in the data cache. A value of less than 300 seconds indicates that SQL
Server needs more memory.
In addition to the System Monitor tool, you can use the following dynamic management views to collect data about SQL Server memory:
sys.dm_exec_query_stats provides statistics on memory and CPU usage for a
specific query.
sys.dm_exec_cached_plans returns a list of the query plans that are currently cached
in memory.
sys.dm_os_memory_objects provides information about object types in memory,
such as MEMOBJ_COMPILE_ADHOC and MEMOBJ_STATEMENT.
sys.dm_os_memory_clerks returns the set of all memory clerks (memory clerks
access memory node interfaces to allocate memory) that are currently active in the
instance of SQL Server.
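As a brief illustration of the counters and views just listed, the sketch below reads Total Server Memory and Page Life Expectancy from sys.dm_os_performance_counters and then summarizes the largest memory clerks; treat it as a starting point for building a baseline rather than a complete assessment.

-- Key buffer-related counters exposed inside SQL Server.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Total Server Memory (KB)', 'Page life expectancy');

-- The ten largest memory clerks by single-page allocations
-- (single_pages_kb is the SQL Server 2005/2008 column name).
SELECT TOP (10)
    type,
    SUM(single_pages_kb) AS SinglePagesKB
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY SinglePagesKB DESC;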

TAKE NOTE

When you're trying to establish actual memory usage and peak usage, make sure you
collect information over a complete business cycle in order to obtain the most accurate
data. For example, if an organization generates a large number of reports the first week
of each month, collect the peak usage data when those reports are generated.


LAB EXERCISE
Perform the exercise in your lab manual.

TAKE NOTE
Microsoft has replaced the Microsoft Operations Manager (MOM) product with System Center Operations Manager (SCOM).

Determine whether the current database size and available memory are well matched. If the
database, including its indexes, fits completely into the available memory, there will be no
page faults. When a large database can't fit in memory, some data must be retrieved from
the disk when required. Page faults can be minimized by using the buffer cache efficiently.
Determine the amount of memory being used by SQL Server connections.
In Exercise 1.4, you'll use System Monitor to assess memory requirements.
The point of all this collecting and reviewing is to identify trouble spots that need to be
addressed in the current configuration or that will play a role in modification and future
growth of the infrastructure.
Consequently, you should track memory usage values regularly and establish a baseline. To
gather data for establishing the baseline, you can use the System Monitor counters for SQL
Server memory usage. When they're present, you can also use management tools such as
Microsoft's System Center Operations Manager (SCOM) to gather data on memory usage
across a set of enterprise database servers.
You should also establish minimum and maximum usage values. Using this data ensures that
the memory usage for the current period doesn't exceed the established limits. If you compare
current memory usage values with the baseline, you can assess whether SQL Server has sufficient memory for normal operations. If memory is insufficient, the database server is said to
be under memory pressure, a circumstance that needs to be addressed sooner rather than later.

FORECASTING AND PLANNING MEMORY REQUIREMENTS


Once you have a good understanding of the current memory requirements, you should next
estimate future memory needs. In order to do so, establish the following information:
Determine the number of SQL Server instances. Ensure that the server has the
optimal number of SQL Server instances. (For more information on instances, see
Lesson 3.) Running too many instances can cause memory pressure. In some cases, such
as in a multiple-instance cluster during a failover, multiple instances may be required.
Estimate database growth. Because the memory needed by a database may grow if the
size of the database and the volume of data that is queried increases, adding new databases to the same SQL Server instance may also create memory pressure.
Specify the number of concurrent users. An increase in the number of user connections may result in a wider range of queries with varied parameters. This increases the
pressure on memory because more data is cached in response to the queries.
Use baseline data. Once you have the baseline data, you can collect current usage information and then compare it to historical data to identify growth trends.
Determine the rate of growth in memory usage. By correlating the rate of growth in
memory usage with a measurable variable, such as user connections, you can estimate
future memory requirements. For example, if you determine the data cache increases by
50 percent for every 500 users you can estimate the additional memory requirements of
the server if the number of users increases by 50.

Specifying Software Versions and Hardware Configurations


THE BOTTOM LINE

Microsoft employs lots of programmers working diligently to change operating systems and
applications. Intel, AMD, and Nvidia work long hours to make improvements. Knowledge
changes. Truth is time dependent. Your challenge is to keep up. This is no mean feat.

In this section of the lesson, you won't learn every single aspect of hardware and software
selection that you should apply. Given the rate of hardware and software change, any specific
recommendations would likely be out of date by the time this textbook reaches the shelves.
However, you're going to need to make these decisions every step of the way. There are no
hard-and-fast rules, but there are best practices. Apply them, and you can't go wrong.

Following Best Practices


You should apply the following best practices when selecting database server hardware
and software:
Meet or exceed the design requirements. Based on your assessment, the safest and
most effective course is to at least meet design requirements not only for the present but
also for the future. Be wary of technology obsolescence. Often, the best way to do that is
to deliberately select hardware and software that exceed the requirements.
Perform cost-benefit analyses. When choosing hardware or software, always perform
a cost-benefit analysis. The purpose is to ensure that the benefit you'll obtain from the
new item is justifiable in terms of increased throughput that offsets the cost. Spending
money on a new high-speed memory bus may be warranted on a server with a heavy
workload because the new hardware will increase the server's performance; but it
wouldn't be appropriate for a small server with little workload, such as a server used to
store archival data.
Choose from approved hardware and software configurations. Most organizations
have a list of approved hardware and software products and configurations that
restricts your choices. One benefit is that the standardization of hardware and
software reduces the complexity of the database server infrastructure and thereby
simplifies maintenance and reduces implementation costs. Therefore, you should
try to make selections that are within the framework of the hardware and software
standards established by your organization. Keep in mind that it's better to choose
database-server hardware and software products that have already been successfully
used in a given environment.
Be prepared to justify variations from standards. Although it's better to stay with a
standardized setup, it isn't uncommon for standards to lag behind improvements and
upgrades in database server hardware and software products. Consequently, you may need
to make hardware and software choices that vary from the current standards.
Design requirements should supersede standards, but these variations need to be approved by
management. The best ways to justify a variation are to clearly demonstrate that the existing
standards don't allow you to meet specific design requirements and to describe how the
hardware and software products you've proposed meet those requirements.
Bear in mind that just because the variation may be justified doesn't always mean a
variation or standards change will be approved. Variations from existing standards are
frequently rejected, usually for budgetary reasons. If that occurs, you need to identify
alternative hardware and software products that come closest to fulfilling the requirements.

Choosing a Version and Edition of the Operating System


The version and edition of the operating system you use, if not predetermined by
organizational standards, depends on the version of SQL Server you select. Table 1-1
specifies the minimum versions and editions of the operating system required for each
edition of SQL Server 2005. Table 1-2 specifies the equivalent information for each
edition of SQL Server 2008.



Table 1-1
SQL Server 2005 editions and minimum operating system versions and editions

TAKE NOTE
Notice the lack of Vista. Check http://technet.microsoft.com/en-us/library/aa905868.aspx.

EDITION    OPERATING SYSTEM VERSION AND EDITION

SQL 2005 Enterprise (64-bit) x64

Windows Server 2003/2008: Standard x64, Enterprise


x64, or Datacenter x64 edition with Service Pack 1
or later

SQL 2005 Standard (64-bit) x64

XP Professional 64 or later; Windows Server 2003/2008:


Standard x64, Enterprise x64, or Datacenter x64 edition
with Service Pack 1 or later

SQL 2005 Developer (64-bit) x64

Windows XP Professional 64 or later; Windows Server


2003/2008: Standard x64, Enterprise x64, or Datacenter
x64 edition with Service Pack 1 or later

SQL 2005 Enterprise (64-bit) IA64

Microsoft Windows Server 2003/2008 Enterprise


Edition or Datacenter edition for Itanium-based
systems with SP 1 or later

SQL 2005 Standard (64-bit) IA64

Microsoft Windows Server 2003/2008 Enterprise


Edition or Datacenter edition for Itanium-based
systems with SP 1 or later

SQL 2005 Developer (64-bit) IA64

Microsoft Windows Server 2003/2008 Enterprise


Edition or Datacenter edition for Itanium-based
systems with SP 1 or later

SQL 2005 Enterprise (32-bit)

Windows 2000 Server with Service Pack 4 or later;


Windows Server 2003/2008: Standard, Enterprise,
or Datacenter edition with Service Pack 1 or later;
Windows Small Business Server 2003 with Service
Pack 1 or later; Windows 2000 Professional with
Service Pack 4 or later

SQL 2005 Standard (32-bit)

XP with Service Pack 2 or later; Windows 2000


Server with Service Pack 4 or later; Windows Server
2003/2008: Standard, Enterprise, or Datacenter
edition with Service Pack 1 or later; Windows Small
Business Server 2003 with Service Pack 1 or later;
Windows 2000 Professional with Service Pack 4 or
later

SQL 2005 Workgroup

XP with Service Pack 2 or later; Windows 2000


Server with Service Pack 4 or later; Windows Server
2003/2008: Standard, Enterprise, or Datacenter
edition with Service Pack 1 or later; Windows Small
Business Server 2003 with Service Pack 1 or later;
Windows 2000 Professional with Service Pack 4
or later

SQL 2005 Developer (32-bit)

Windows XP with Service Pack 2 or later; Windows


2000 Server with Service Pack 4 or later; Windows
Server 2003/2008: Standard, Enterprise, or
Datacenter edition with Service Pack 1 or later;
Windows Small Business Server 2003 with Service
Pack 1 or later; Windows 2000 Professional with
Service Pack 4 or later

SQL 2005 Express

Windows XP with Service Pack 2 or later; Windows


2000 Server with Service Pack 4 or later; Windows
Server 2003/2008: Standard, Enterprise or Datacenter
edition with Service Pack 1 or later; Windows Small
Business Server 2003 with Service Pack 1 or later

Table 1-2
SQL Server 2008 editions and minimum operating system versions and editions

EDITION    OPERATING SYSTEM VERSION AND EDITION

SQL 2008 Enterprise (64-bit) x64

Windows Server 2003 SP2 64-bit x64 Standard


Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Server 2008 64-bit x64 Standard
Windows Server 2008 64-bit x64 Datacenter
Windows Server 2008 64-bit x64 Enterprise

SQL 2008 Standard (64-bit) x64

Windows XP Professional x64


Windows Server 2003 SP2 64-bit x64 Standard
Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Vista Ultimate x64
Windows Vista Enterprise x64
Windows Vista Business x64
Windows Server 2008 x64 Web
Windows Small Business Server 2008
Windows Server 2008 for Windows Essential Server Solutions

SQL 2008 Developer (64-bit) x64

Windows XP x64 Professional


Windows Server 2003 SP2 64-bit x64 Standard
Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Vista Ultimate x64
Windows Vista Home Premium x64
Windows Vista Home Basic x64
Windows Vista Enterprise x64
Windows Vista Business x64
Windows Server 2008 x64 Web

SQL 2008 Workgroup (64-bit) x64

Windows XP x64 Professional


Windows Server 2003 SP2 64-bit x64 Standard
Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Vista Ultimate x64
Windows Vista Home Premium x64
Windows Vista Home Basic x64
Windows Vista Enterprise x64
Windows Vista Business x64
Windows Server 2008 x64 Web

SQL 2008 Web (64-bit) x64

Windows XP x64 Professional


Windows Server 2003 SP2 64-bit x64 Standard
Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Vista Ultimate x64
Windows Vista Enterprise x64
Windows Vista Business x64
Windows Server 2008 x64 Web


SQL 2008 Developer (64-bit) IA64

Windows Server 2003 SP2 64-bit Itanium Datacenter


Windows Server 2003 SP2 64-bit Itanium Enterprise
Windows Server 2008 64-bit Itanium Edition

SQL 2008 Enterprise (32-bit)

Windows Server 2003 SP2 Standard


Windows Server 2003 SP2 Enterprise
Windows Server 2003 SP2 Datacenter
Windows Server 2003 Small Business Server SP2 Standard
Windows Server 2003 Small Business Server SP2 Premium
Windows Server 2003 SP2 64-bit x64 Standard
Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Server 2008 Standard
Windows Server 2008 Web
Windows Server 2008 Datacenter
Windows Server 2008 Enterprise
Windows Server 2008 x64 Standard
Windows Server 2008 x64 Datacenter
Windows Server 2008 x64 Enterprise

SQL 2008 Standard (32-bit)

Windows XP Professional SP2


Windows XP SP2 Tablet
Windows XP x64 Professional
Windows XP Media Center
Windows XP Professional Reduced Media
Windows Server 2003 SP2 Small Business Server R2 Standard
Windows Server 2003 SP2 Small Business Server R2 Premium
Windows Server 2003 SP2 Standard
Windows Server 2003 SP2 Enterprise
Windows Server 2003 SP2 Datacenter
Windows Server 2003 SP2 Small Business Server Standard
Windows Server 2003 SP2 Small Business Server Premium
Windows Server 2003 SP2 64-bit x64 Standard
Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Vista Ultimate
Windows Vista Enterprise
Windows Vista Business
Windows Vista Ultimate x64
Windows Vista Enterprise x64
Windows Vista Business x64
Windows Server 2008 Web
Windows Server 2008 Standard Server
Windows Server 2008 Datacenter
Windows Server 2008 Enterprise
Windows Server 2008 x64 Standard
Windows Server 2008 x64 Datacenter
Windows Server 2008 x64 Enterprise
Windows Small Business Server 2008
Windows Server 2008 for Windows Essential Server Solutions
(Continued )

Table 1-2
SQL Server 2008 editions and minimum operating system versions and editions (Continued )

EDITION    OPERATING SYSTEM VERSION AND EDITION

SQL 2008 Express x64 (64-bit)

Windows Server 2003 x64


Windows Server 2003 SP2 64-bit x64 Standard
Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Vista Ultimate x64
Windows Vista Home Premium x64
Windows Vista Home Basic x64
Windows Vista Enterprise x64
Windows Vista Business x64
Windows Server 2008 64-bit x64 Web
Windows Server 2008 64-bit x64 Standard
Windows Server 2008 64-bit x64 Datacenter
Windows Server 2008 64-bit x64 Enterprise

SQL 2008 Developer (32-bit)

Windows XP Home Edition SP2


Windows XP Professional SP2
Windows XP Tablet SP2
Windows XP Professional x64 SP21
Windows XP Media Center
Windows XP Professional Reduced Media
Windows XP Home Edition Reduced Media
Windows Server 2003 SP2 Standard
Windows Server 2003 SP2 Enterprise
Windows Server 2003 SP2 Datacenter
Windows Server 2003 SP2 Small Business Server Standard
Windows Server 2003 SP2 Small Business Server Premium
Windows Server 2003 SP2 64-bit x64 Standard
Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Vista Ultimate
Windows Vista Home Premium
Windows Vista Home Basic
Windows Vista Starter Edition
Windows Vista Enterprise
Windows Vista Business
Windows Vista Ultimate 64-bit x64
Windows Vista Home Premium 64-bit x64
Windows Vista Home Basic 64-bit x64
Windows Vista Enterprise 64-bit x64
Windows Vista Business 64-bit x64
Windows Server 2008 Web
Windows Server 2008 Standard
Windows Server 2008 Enterprise
Windows Server 2008 Datacenter
Windows Server 2008 64-bit x64 Standard
Windows Server 2008 Datacenter
Windows Server 2008 Enterprise


EDITION    OPERATING SYSTEM VERSION AND EDITION

SQL 2008 Workgroup (32-bit)

Windows XP Professional SP2


Windows XP SP2 Tablet
Windows XP Professional 64-bit x64
Windows XP SP2 Media Center 2002
Windows XP SP2 Media Center 2004
Windows XP Media Center 2005
Windows XP Professional Reduced Media
Windows Server 2003 SP2 Standard
Windows Server 2003 SP2 Enterprise
Windows Server 2003 SP2 Datacenter
Windows Server 2003 SP2 Small Business Server Standard
Windows Server 2003 SP2 Small Business Server Premium
Windows Server 2003 64-bit x64 Standard
Windows Server 2003 64-bit x64 Datacenter
Windows Server 2003 64-bit x64 Enterprise
Windows Vista Ultimate
Windows Vista Enterprise
Windows Vista Business
Windows Vista 64-bit x64 Ultimate
Windows Vista 64-bit x64 Enterprise
Windows Vista 64-bit x64 Business
Windows Server 2008 Web
Windows Server 2008 Standard
Windows Server 2008 Datacenter
Windows Server 2008 Enterprise
Windows Server 2008 64-bit x64 Standard
Windows Server 2008 64-bit x64 Datacenter
Windows Server 2008 64-bit x64 Enterprise

SQL 2008 Web (32-bit)

Windows XP Professional XP x64


Windows XP Media Center
Windows XP Professional Reduced Media
Windows Server 2003 SP2 Standard
Windows Server 2003 SP2 Enterprise
Windows Server 2003 SP2 Datacenter
Windows Server 2003 SP2 Small Business Server Standard
Windows Server 2003 SP2 Small Business Server Premium
Windows Server 2003 SP2 64-bit x64 Standard
Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Vista Ultimate
Windows Vista Enterprise
Windows Vista Business
Windows Vista Ultimate x64
Windows Vista Enterprise x64
Windows Vista Business x64
Windows Server 2008 Web
Windows Server 2008 Standard Server
Windows Server 2008 Datacenter
Windows Server 2008 Enterprise
Windows Server 2008 x64 Standard
Windows Server 2008 x64 Datacenter
Windows Server 2008 x64 Enterprise


(Continued )

Table 1-2
SQL Server 2008 editions and minimum operating system versions and editions (Continued )

EDITION    OPERATING SYSTEM VERSION AND EDITION

SQL 2008 Express (32-bit), Express with Tools, and Express with Advanced Services (32-bit)

Windows XP SP2 Home


Windows XP SP2 Professional
Windows XP SP2 Tablet
Windows XP Media Center
Windows Server 2003 Reduced Media
Windows XP Home Edition Reduced Media
Windows Server 2003 SP2 Standard
Windows Server 2003 SP2 Enterprise
Windows Server 2003 SP2 Datacenter
Windows Server 2003 SP2 Web Edition
Windows Server 2003 SP2 Small Business Server Standard
Windows Server 2003 SP2 Small Business Server Premium
Windows Server 2003 SP2 64-bit x64 Standard
Windows Server 2003 SP2 64-bit x64 Datacenter
Windows Server 2003 SP2 64-bit x64 Enterprise
Windows Vista Ultimate
Windows Vista Home Premium
Windows Vista Home Basic
Windows Vista Enterprise
Windows Vista Business
Windows Vista Ultimate 64-bit x64
Windows Vista Home Premium 64-bit x64
Windows Vista Home Basic 64-bit x64
Windows Vista Enterprise 64-bit x64
Windows Vista Business 64-bit x64
Windows Server 2008 Standard Server
Windows Server 2008 Enterprise
Windows Server 2008 Datacenter
Windows Server 2008 Web Edition
Windows Server 2008 64-bit x64 Web Edition
Windows Server 2008 64-bit x64 Standard
Windows Server 2008 64-bit x64 Datacenter
Windows Server 2008 64-bit x64 Enterprise
Windows XP Embedded SP2 feature pack 2007
Windows Embedded for Point of Service SP2

CERTIFICATION READY?
Know the different editions and their requirements and limitations.

Choosing an Edition of SQL Server


Because SQL Server is used by a vast audience of different people, businesses, schools,
government agencies, and so on, each of which has different needs as well as diverse
requirements, different editions of SQL Server have been provided by Microsoft. SQL
Server 2005 and 2008 each come in different editions. Each edition targets a group of
users by matching the unique performance, runtime, and price requirements of
organizations and individuals.

TAKE NOTE
Both SQL Server 2005 and SQL Server 2008 have a specialized edition for embedded
applications (such as handheld devices). This embedded edition is the Compact edition
and, because of its very specialized nature, we will not discuss it further in this text.


SQL SERVER 2005


There are five different editions of SQL Server 2005: Microsoft SQL Server 2005 Enterprise/
Developer/Evaluation edition, Microsoft SQL Server 2005 Standard edition, Microsoft
SQL Server 2005 Workgroup edition, Microsoft SQL Server 2005 Developer edition, and
Microsoft SQL Server 2005 Express edition/Express edition with Advanced Services. The
most common editions used are Enterprise, Standard, and Express, because these editions fit
the requirements and product pricing needed in production server environments:
SQL Server 2005 Enterprise edition (32-bit and 64-bit). This edition comes in both
32-bit and 64-bit varieties. This is the ideal choice if you need a SQL Server 2005
edition that can scale to near limitless size while supporting enterprise-sized On-Line
Transaction Processing (OLTP), highly complex data analysis, data-warehousing systems,
and Web sites.
Enterprise edition has all the bells and whistles and is suited to provide comprehensive
business intelligence and analytics capabilities. It includes high-availability features such
as failover clustering and database mirroring. It's ideal for large organizations or any
situation that requires a version of SQL Server 2005 able to handle complex workloads.
SQL Server 2005 Standard edition (32-bit and 64-bit). Standard includes the
essential functionality needed for e-commerce, data warehousing, and line-of-business
solutions but does not include some advanced features such as Advanced Data
Transforms, Data-Driven Subscriptions, and DataFlow Integration using Integration
Services. The Standard edition is best suited for the small- to medium-sized organization
that needs a complete data-management and analysis platform without many of the
advanced features found in the Enterprise edition.
SQL Server 2005 Workgroup edition (32-bit only). Workgroup edition is the data
management solution for small organizations that need a database with no limits on size
or number of users. It includes only the core database features of the product line (it
doesn't include Analysis Services or Integration Services, for example). It's intended as
an entry-level, easy-to-manage database.
SQL Server 2005 Developer edition (32-bit and 64-bit). Developer edition has all
the features of Enterprise edition, but it's licensed only for use as a development and test
system, not as a production server. This edition is a good choice for persons or organizations that build and test applications but don't want to pay for Enterprise edition.
SQL Server 2005 Express edition (32-bit only). SQL Server Express is a free,
easy-to-use, simple-to-manage database without many of the features of other editions
(such as Notification Services, Analysis Services, Integration Services, and Report
Builder). It can function as the client database as well as a basic server database. It's a
good option if all that's needed is a stripped-down version of SQL Server 2005. Express
is typically used by low-end server users, nonprofessional developers building web
applications, and hobbyists building client applications.
SQL SERVER 2008
With SQL Server 2008, Microsoft is now bundling both the 32-bit and 64-bit software
together with one license for all editions of the product. This in itself is a significant
change from SQL Server 2005, where you could only purchase or obtain 32-bit software for the
Workgroup and Express editions. Also, a new Web edition has been created for Internet web
server usage. The 2008 editions of SQL Server are:
SQL Server 2008 Enterprise edition. The Enterprise edition includes all features in SQL Server 2008. The following features are only available in this edition of SQL Server 2008 (plus the Developer and Evaluation editions, as they are simply restricted-license versions of Enterprise):
• Data Compression
• Extensible Key Management
• Hot Add CPUs and RAM
• Resource Governor
• IA-64 Processor Hardware Support
• Table and Index Partitioning
• Sparse Columns
• Indexed Views
SQL Server 2008 Standard edition. This is the second most capable edition, below only the Enterprise edition. There is no limit on database size, no RAM limit other than the operating system maximum, and up to four processors are supported.
SQL Server 2008 Developer edition. Developer edition has all the features of
Enterprise edition, but it's licensed only for use as a development and test system, not as a production server. This edition is a good choice for persons or organizations that build and test applications but don't want to pay for Enterprise edition.
SQL Server 2008 Workgroup edition. RAM is limited to 4 GB.
SQL Server 2008 Web edition. This edition is designed for Internet-oriented databases
provided by organizations such as hosting companies. There is no license limit to the size
of any database and up to four CPUs may be utilized. Also, there is no maximum limit
on RAM other than the maximum allowed by the operating system on which this edition is installed.
SQL Server 2008 Express edition. The Express edition is free and is available in three
versions. Each of these might be considered subeditions. These versions or subeditions
are regular (runtime only) Express, Express with Tools, and Express with Advanced
Services. Management Studio is only included with the latter two subeditions. In all of
these versions, the core database engine is the same and all of the following limits apply:
all databases are limited to a maximum of 4 GB, only one CPU is permitted, and RAM
is limited to 1 GB.

Choosing a CPU Type


SQL Server supports both 32-bit and 64-bit CPUs. In addition, it supports multicore
CPUs and CPUs that use hyperthreading. When estimating CPU requirements, you
should consider the benefits of using different processor types. The benefits of using a
64-bit CPU instead of a 32-bit CPU include:
Larger direct-addressable memory. A database server running Microsoft Windows
Server 2003/2008 on a 64-bit architecture can support up to the operating system maximum (up to 2 terabytes of memory). In contrast, a Windows server with a 32-bit architecture can directly address a maximum of 3.25 GB of physical memory. The server can
indirectly address memory beyond this limit only if you enable the Address Windowing
Extensions (AWE) switch.
Better on-chip cache management. A 64-bit CPU allows SQL Server memory structures such as the query cache, connection pool, and lock manager to use all available
memory. A 32-bit CPU doesn't.
Enhanced on-processor parallelism. The 64-bit architecture can support 64 processors,
allowing SQL Server to potentially support more concurrent processes, applications, and
users on a single server. A 32-bit architecture can support only 32 processors.
A multicore CPU includes two or more complete execution cores that run at the same
frequency and share the same packaging and interface with the chipset and the memory. In
addition, the cores contain two or more physical processors and two or more L2 cache blocks.
On a Windows system running SQL Server, each core can be used as an independent processor to increase the multithreaded throughput.
Hyperthreading lets a CPU execute multiple threads simultaneously. Consequently, the CPU
throughput increases. A CPU that supports hyperthreading contains two architectural states


on a single physical core. Each state acts as a logical CPU for the operating system. However,
the two logical CPUs use the same execution resources, so you don't get the performance benefits of using two physical CPUs.
In the recent past, 64-bit, multicore, and hyperthreading CPUs were more expensive than 32-bit
CPUs. SQL Server 2005 requires a variety of different CPUs depending on edition, as summarized
in Table 1-3. Table 1-4 summarizes the same information for the editions of SQL Server 2008.
Table 1-3
SQL Server 2005 editions and minimum CPU type and speed

EDITION                                                 | OS TYPE | MINIMUM CPU TYPE AND SPEED
Enterprise, Standard, and Developer                     | 64-bit  | 1.0 GHz AMD Opteron, AMD Athlon 64, Intel Xeon with Intel EM64T support, or Intel Pentium IV with EM64T support processor
Enterprise, Standard, and Developer                     | IA64    | Itanium CPU: 1.0 GHz or faster processor
Enterprise, Standard, Workgroup, Developer, and Express | 32-bit  | 600 MHz Pentium III compatible CPU; 1.0 GHz or faster processor recommended

Table 1-4
SQL Server 2008 editions and minimum CPU type and speed

EDITION                                                      | OS TYPE | MINIMUM CPU TYPE AND SPEED          | RECOMMENDED CPU SPEED
Enterprise, Standard, Developer, Workgroup, Web, and Express | 64-bit  | 64-bit CPU, 1.4 GHz                 | 2.0 GHz or faster
Enterprise                                                   | IA64    | Itanium CPU, 1.0 GHz                | 1.0 GHz or faster
Developer                                                    | IA64    | Itanium CPU, 1.0 GHz                | Not specified
Enterprise, Standard, Developer, Workgroup, Web, and Express | 32-bit  | Pentium III compatible CPU, 1.0 GHz | 2.0 GHz or faster

Choosing Memory Options


This may sound simplistic, but the best memory option still follows the oldest rule of thumb: Buy as much of the fastest RAM as you can that is appropriate for the system you're installing it on.
Increasing memory often solves what may initially appear to be a CPU bottleneck. Minimum
and recommended RAM requirements for the different editions of SQL Server 2005 are
presented in Table 1-5. Minimum and recommended RAM requirements for SQL Server
2008 editions are presented in Table 1-6.
Table 1-5
SQL Server 2005 editions and minimum RAM

EDITION                                        | OS TYPE                     | MINIMUM RAM | RECOMMENDED RAM
Enterprise, Standard, Workgroup, and Developer | 32-bit, 64-bit, and Itanium | 512 MB      | 1 GB or more
Express                                        | 32-bit                      | 192 MB      | 512 MB or more

Table 1-6
SQL Server 2008 editions and minimum RAM

EDITION                                             | OS TYPE | MINIMUM RAM | RECOMMENDED RAM
Enterprise, Standard, Developer, Workgroup, and Web | 64-bit  | 512 MB      | 2 GB
Express                                             | 64-bit  | 512 MB      | 1 GB
Enterprise and Developer                            | IA64    | 512 MB      | 2 GB
Enterprise, Standard, Developer, Workgroup, and Web | 32-bit  | 512 MB      | 2 GB
Express                                             | 32-bit  | 256 MB      | 1 GB

Determining Storage Requirements


As with memory options, the best rule of thumb is to buy as much hard disk space as you
can afford. You'll learn about physical storage in more detail in the next lesson. All editions
of SQL Server 2005 have effectively the same minimum disk space requirements. All editions
of SQL Server 2008 have higher space requirements than SQL Server 2005. All editions of
SQL Server 2008 have the same disk space requirements. These requirements are:
SQL Server 2005
350 MB of available hard disk space for the recommended installation with approximately
425 MB of additional space for SQL Server Books Online, SQL Server Mobile Books
Online, and sample databases.
SQL Server 2008

CERTIFICATION READY?
When presented with a scenario where several solutions seem plausible, try to determine the one with the best return on investment. Database administration is always about trade-offs; that is, choosing the best option from competing alternatives. See Lesson 3 for an introduction to ROI.

Microsoft has published disk space requirements for the installable software modules within SQL Server 2008. These are the space requirements:

Database Engine, Replication, Full-Text Search | 280 MB
Analysis Services                              | 90 MB
Reporting Services                             | 120 MB
Integration Services                           | 120 MB
Client Services                                | 850 MB
Books Online                                   | 240 MB

To determine SQL Server 2008 disk space requirements, first determine the modules to be
installed and then add up the requirements for those modules.
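For example (using a hypothetical installation), a server that will host only the Database Engine (280 MB), Integration Services (120 MB), and Books Online (240 MB) would need roughly 280 + 120 + 240 = 640 MB of disk space for the product itself, before allowing any space for the databases it will store.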

Planning for Hot Add CPUs and RAM


Microsoft has been developing methods to enable the dynamic reconfiguration of hardware
so that servers continue operating while the physical hardware is changed. Underlying the ability of SQL Server to dynamically handle hardware changes is Windows Server support for dynamic hardware reconfiguration; of course, the physical server hardware platform must also support physical hardware changes while the server is running. Most hardware does not
yet support this ability. When designing the hardware infrastructure, a decision should be made
about whether to provide for dynamic hardware changes. If the decision is made to provide for
this possibility, then specific models of server hardware must be chosen to facilitate this potential
situation. Further, specific Windows Server operating system versions must be selected.


Once the hardware and the operating system have been selected to allow for dynamic
hardware changes, then the edition of SQL Server to be used can be selected. Support for
additional RAM is available in SQL Server 2005 and SQL Server 2008; SQL Server 2008 also supports dynamically adding CPUs (processors). These features
are known as Hot Add RAM and Hot Add CPUs. Despite these names, support also exists
for hardware removal. The RECONFIGURE command must be executed in SQL Server so that it recognizes and uses any such hardware change.
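As a minimal sketch, after the operating system has recognized a hot-added CPU, the change is surfaced to SQL Server by running the statement from any query window:

-- Run after the operating system recognizes the hot-added CPU;
-- SQL Server then begins scheduling work on the new processor.
RECONFIGURE;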
Requirements for Hot Add CPU
• Requires special hardware that supports Hot Add CPU.
• Requires the 64-bit edition of Windows Server 2008 Datacenter or the Windows Server 2008 Enterprise edition for Itanium-Based Systems operating system.
• Requires SQL Server 2008 Enterprise edition.
• SQL Server cannot be configured to use soft NUMA.
Requirements for Hot Add RAM
• Requires special hardware that supports Hot Add RAM.
• Requires SQL Server Enterprise edition and is only available for 64-bit SQL Server and for 32-bit SQL Server when AWE is enabled. Hot Add Memory is not available for 32-bit SQL Server when AWE is not enabled.
• Requires Windows Server 2003/2008, Enterprise and Datacenter editions.
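For the 32-bit case, a sketch of enabling AWE through sp_configure follows (the SQL Server service must be restarted for the setting to take effect, and the service account needs appropriate memory privileges):

-- 'awe enabled' is an advanced option, so expose advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;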

SKILL SUMMARY
In this lesson, you reviewed the factors you need to consider when you're assessing the capacity requirements of a database server. You studied a variety of methods for collecting information about current capacity and how to forecast future needs. In addition, you familiarized yourself with the techniques and skills you'll need to achieve a balance between
business and technical requirements. Finally, you learned about the various hardware and
software considerations you should factor into your design plans, including hardware,
operating system, and software versions.
For the certification examination:

• Be familiar with System Monitor counters. Know how to use the System Monitor tool and which counters provide relevant information about system status. Know the techniques of collecting a baseline, and when and how to use it.
• Understand business requirements. Know different business requirements and the subsystems they impact. Make sure you understand the effect of regulatory requirements on storage needs.
• Know the prerequisites. Know the requirements and limitations for installing the various editions of SQL Server 2005, including what operating system, how much memory, and the speed of CPU you need.
• Understand the cost-benefit relationship between 32-bit and 64-bit processors. Be familiar with the advantages and disadvantages of the two processor types.


Knowledge Assessment
Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
1. Which of the following factors should be considered when projecting disk-storage
requirements?
a. Forecasted business growth
b. Historical trends
c. Index maintenance space requirements
d. All of the above
2. Which of the following file types should not be considered when determining the
amount of disk space used by the database files?
a. Database files
b. Database paging file
c. Database transaction logs
d. Full-text indexes
3. What can result if improper disk-space allocation causes SQL Server 2005 to
dynamically grow the database by requesting extra disk space from the operating system?
(Choose all that apply.)
a. Truncating of log files
b. Reduced network bandwidth
c. Disk/file fragmentation
d. Processor bottleneck
4. Which of the following are System Monitor counters that can be used to assess disk I/O
rates? (Choose all that apply.)
a. PhysicalDisk:Disk Read Bytes/sec
b. PhysicalDisk:Disk Write Bytes/sec
c. PhysicalDisk:Avg. Disk Queue Length
d. PhysicalDisk:Disk Modify Bytes/sec
e. All of the above
5. The length of time data must be retained is also referred to as what?
a. Lifetime of data
b. Data Retention Period (DRP)
c. Data estimation period
d. Longevity of data
6. In order to start the System Monitor tool, you should type which of the following
commands in the Run text box?
a. perfmon
b. sysmon
c. sysinfo
d. mssysmon
e. mmc
7. If regulatory requirements or internal procedures require the encryption of data, which
subsystems are directly impacted? (Choose all that apply.)
a. Physical storage
b. Memory
c. CPU
d. Network
e. SQL Server version


8. If you calculate future disk-space requirements based on a constant amount in a specified period, you are calculating what?
a. Linear growth
b. Compound growth
c. Trigonometric growth
d. Geometric growth
e. Incremental growth
9. Which of the following will not affect the CPU performance of a database server?
a. Affinity mask settings
b. Number of connections
c. Available memory
d. Network bandwidth
e. Processor type
10. Affinity masks can be used to do what? (Choose all that apply.)
a. Change the bit speed of a CPU.
b. Restrict a SQL Server instance to a specific subset of CPUs.
c. Ensure that each thread always uses the same CPU between interrupts.
d. Free up RAM that was locked earlier.
e. Restrict CPU operation to specific file types.
11. Which of the following is not a benefit of using a 64-bit CPU?
a. Larger direct-addressable memory
b. Better on-chip cache management
c. Lower cost per chip
d. Enhanced on-processor parallelism
e. None of the above
12. Characteristics of a multicore CPU include which of the following? (Choose all that
apply.)
a. Executes multiple cores simultaneously
b. Includes two or more complete execution cores
c. All cores run at different frequencies
d. Can increase multithreaded throughput in SQL Server 2005
e. Contains two architectural states on each core
13. Which of the following are the most important factors in estimating CPU requirements?
(Choose all that apply.)
a. Establishing a baseline
b. Business plans
c. Historical trends
d. Correlation between CPU usage and a measurable variable
e. Longevity of data
14. Which of the following counters indicates how many bytes of memory are available?
a. Memory:Available Bytes
b. Memory:Pages/sec
c. Memory:Available RAM
d. Memory:Available Pages
e. Memory:Bytes/sec
15. Which of the following dynamic management views can be used to gather data about
memory usage by SQL Server? (Choose all that apply.)
a. sys.dm_exec_query_stats
b. sys.dm_exec_cached_plans
c. sys.dm_os_memory_pages
d. sys.dm_os_memory_objects
e. sys.dm_exec_query_calls


16. Which one of the following counters is used to determine the amount of memory used
by SQL Server connections?
a. SQLServer:MemoryManager:Total Server Memory
b. SQLServer:MemoryManager:Working Set
c. SQLServer:Buffer Manager:Connection Memory (KB)
d. SQLServer:Page Manager:Connection Memory (KB)
e. SQLServer:Memory Manager:Connection Memory (KB)
17. Which of the following may affect network traffic?
a. Backup schedules
b. Firewalls
c. Antivirus applications
d. Enabled network protocols
e. All of the above
18. Which of the following business requirements should be considered when modifying or
designing a database infrastructure?
a. Budgetary constraints
b. IT policies
c. Data security
d. Data availability
e. All of the above
19. During your survey, you determine that one of the existing database servers has an
800 MHz Pentium III processor with 256 MB of RAM and a 400 GB hard drive, running Windows XP. Which version of SQL Server can you install on this machine?
a. Workgroup
b. Standard
c. Enterprise
d. Developer
e. None of the above
20. You want to install SQL Server 2005 Enterprise Edition on a 32-bit CPU machine.
Which operating systems can this machine use? (Choose all that apply.)
a. Windows 2003, Service Pack 1
b. Windows 2000 Professional, Service Pack 3
c. Windows XP, Service Pack 1
d. Windows 2000 Server, Service Pack 5
e. All of the above

Case Study
Examining the Infrastructure
Thylacine Savings & Loan Association is a large financial institution serving
approximately 1.6 million customers over a broad geographic area. The company is
headquartered in the city of Trevallyn, which also serves as northern headquarters,
with 407 employees. Three branch offices are located in Stratford (Eastern operations),
Belleville (Western), and Rock Hill (Southern).
The company currently has a 3 terabyte OLTP database that tracks more than 2 billion
transactions each year. The main database for all transactions and operations is located
in Trevallyn. Regional databases contain deposit/withdrawal information only, and the
headquarters database is updated daily from the regional offices.
Thylacine's departmental database servers are dispersed throughout the headquarters
location.


The company is currently experiencing 4 percent annual growth and plans to expand
into four new markets at the rate of one new market every two years. The database is
growing at a rate of 6 percent per year and will exceed available hard disk space in the
future. Additionally, server capacity is overloaded, resulting in poor performance and
long delays. A large portion of the database data is historical information.
After lengthy consideration, Thylacine Savings & Loan has decided to upgrade its
database system to SQL Server 2005 and has hired you as a consultant database project
architect to address the company's current and future needs.
Use the information in the previous case study to answer the following questions.
1. Briefly summarize the initial steps you should take before beginning capacity planning.
2. Do you need to consider regulatory factors? If so, describe the impact they're likely to have on the various components of the infrastructure's capacity.
3. Which would you give greater weight: the observed growth rate of a database or the projected business growth rate of Thylacine Savings & Loan? Why?

LESSON 2

Designing Physical Storage

LESSON SKILL MATRIX

TECHNOLOGY SKILL                                                           | 70-443 EXAM OBJECTIVE
Design physical storage.                                                   | Foundational
Design transaction log storage.                                            | Foundational
Design backup file storage.                                                | Foundational
Decide where to install the operating system.                              | Foundational
Decide where to place SQL Server service executables.                      | Foundational
Specify the number and placement of files to create for each database.     | Foundational
Design instances.                                                          | Foundational
Decide how many databases to create.                                       | Foundational
Decide on the placement of system databases for each instance.             | Foundational
Decide on the physical storage for the tempdb database for each instance.  | Foundational
Decide on the number of instances.                                         | Foundational
Decide on the naming of instances.                                         | Foundational
Decide how many physical servers are needed for instances.                 | Foundational
Establish service requirements.                                            | Foundational
Specify instance configurations.                                           | Foundational

KEY TERMS
extent: A unit of space allocated to an object. A unit of data input and output; data is stored or retrieved from the disk as an extent (64 kilobytes).
filegroup: A named collection of one or more data files that forms a single unit of data allocation or administration for a database.
instance: A separate and isolated copy of SQL Server running on a server. Application service providers can support multiple businesses and their database needs while guaranteeing that one business cannot see another's data.
page: A unit of data storage. Eight pages comprise an extent.

In the previous lesson, you looked at the various physical requirements and considerations
associated with creating a hardware infrastructure for SQL Server database servers. It's important to remember that although you may tend to look at aspects of the hardware environment as separate (grouping them into memory, disk space, network requirements, and so on), all these components are intrinsically tied to one another and interact together.


In this lesson, the focus is how to best design and organize physical storage. As you might guess,
the first thing you'll learn is that there is no correct answer or magical formula that will work
every time in addressing this issue, any more than there was for assessing hardware requirements
in Lesson 1. One of the most elegant (and maddening) features of SQL Server is that it requires
the infrastructure designer to consider the interaction of all the components in designing the
optimal solution. Inadequate memory, for example, can have a profound influence on the
effectiveness of even the fastest hard disk.
With that in mind, you must examine how to design physical storage for your databases and
instances. To efficiently manage storage for your databases, you need to understand what
objects take up disk space and how SQL Server stores those objects. In SQL Server 2000, for
example, one simple system table tracks space usage, only two objects consume disk space, and
only three types of pages exist to store user data. This structure is relatively easy to manage, but
it also has its limitations, especially regarding how SQL Server stores and retrieves large object
(LOB) data.
SQL Server 2005 and SQL Server 2008 have an enhanced storage model that expands the
number and types of objects that consume space, gives you more flexible options for storing
variable-length LOB data, and adds functionality to store partitioned data in multiple,
different locations.

Understanding SQL Server Storage Concepts

THE BOTTOM LINE

Disk input/output is measured in extent units (64 kilobytes). Every database has at least
two physical files: one for data entries and one for log entries. In both cases the size on disk
is allocated when you define a database. For data files, empty space is stored until an extent
is written. The files are Indexed Sequential Access Method (ISAM) structures. The Global
Allocation Map, Index Allocation Map, and Page Free Space are referenced dynamically to find
and recover the correct extent. The ISAM technique permits files to grow indefinitely and scale as large as you might need, because no more than 64 KB is ever retrieved from or written to disk at a time.

If you've ever worked with data without a computer, you've almost certainly noticed that data takes up a lot of physical space. In a nontechnical environment, that means boxes of paper, lots of file folders, and a plan to keep them organized. Or it may entail miles of shelving filled with vast numbers of files. It was this seemingly endless parade of paperwork (usually official documents) and the practice of tying thick legal documents together with red cloth tape that led to the phrase "cutting through the red tape." The point is that paper data storage consumes physical space.
If you're using SQL Server 2005 to collect and store data for your enterprise, you don't have quite the same space problem, and it's highly unlikely that you'll need to rent a warehouse to hold your files and records. However, you'll still have to address the issue of physical storage and how to design it. That's what you will learn in the following sections after we cover some basic concepts.

Understanding Data Files and Transaction Log Files


Just like any data saved on a computer, the databases you create with SQL Server must
be stored on the hard disk. SQL Server uses three types of files to store databases on disk:
primary data files, secondary data files, and transaction log files.

Primary data files with an .mdf extension are the first files created in a database and can contain user-defined objects, such as tables and views, as well as system tables that SQL Server requires for keeping track of the database. If the database gets too big and you run out of
room on your first hard disk, you can create secondary data files with an .ndf extension on
separate physical hard disks to give your database more room.
Secondary files can be grouped together into filegroups. Filegroups are logical groupings of
files, meaning that the files can be on any disk in the system and SQL Server will still see
them as belonging together. This grouping capability comes in handy for very large databases
(VLDBs), which are many gigabytes or even terabytes in size.

TAKE NOTE

Primary data files and any other files not specifically assigned to another filegroup belong
to the primary filegroup.
For the purpose of illustration, suppose you have a database that is several hundred gigabytes
in size and contains several tables. Suppose that users read from half of these tables quite a bit
and write to the other half quite a bit. Assuming that you have multiple hard disks, you can
create secondary files on two of your hard disks and put them in a filegroup called READ.
Next, create two more secondary files on different hard disks, and place them in a filegroup
called WRITE. Now, when you want to create a new table that is primarily for reading, you
can specifically instruct SQL Server to place it on the READ filegroup. The WRITE group
will never be touched. You have, to a small degree, load-balanced the system, because some
hard disks are dedicated to reading and others to writing. Using filegroups is more complex
than this in the real world, but you get the picture.
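The following sketch shows the general idea in Transact-SQL; the database name, file path, size, and table are hypothetical and would be adjusted to your own environment:

-- Create the READ filegroup and add a secondary data file to it.
ALTER DATABASE SalesDB ADD FILEGROUP [READ];
ALTER DATABASE SalesDB
ADD FILE (NAME = SalesDB_Read1,
          FILENAME = 'E:\SQLData\SalesDB_Read1.ndf',
          SIZE = 500MB)
TO FILEGROUP [READ];

-- Place a read-mostly table on the READ filegroup.
CREATE TABLE dbo.ProductCatalog
(
    ProductID   INT PRIMARY KEY,
    ProductName VARCHAR(100) NOT NULL
) ON [READ];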
The third type of file is a transaction log file. Transaction log files use an .ldf extension and don't contain any objects such as tables or views. To understand these files, it's best to know a little about how SQL Server writes data to disk.
Internally when a user initiates a change to some data (either an INSERT, UPDATE, or
DELETE statement), SQL Server first writes the transaction information out to the transaction log file. Following that action, SQL Server then caches the changed data in memory for
a short period of time. This process of updating the log file first is called a write-ahead log. As
changes to data accumulate, and at frequent intervals, SQL Server flushes the cached data by
performing actual writes to the database data file on disk.
"Why should I want to do this?" you may ask. There are two reasons. The first is speed.
Memory is about 100 times faster than hard disk, so if you pull the data off the disk and
make all the changes in memory, the changes occur about 100 times faster than they would
if you wrote directly to disk. The second reason to use transaction logs is for recoverability.
Suppose you backed up your data last night around 10:00 p.m. and your hard disk containing the data crashed at 11:00 a.m. the next day. You would lose all your changes since last
night at 10:00 p.m. if you wrote to only the data file. Because you've recorded the changes to
the data in the transaction log file (which should be on a separate disk), you can recover all
your data right up to the second of the crash. The transaction log stores transactions in real
time and acts as a sort of incremental backup.
Now, try to imagine the inside of these database files. Imagine what would happen if they had no order or organization, with SQL Server writing data wherever it found space. It would take forever for SQL Server to find your data when you asked for it, and the entire server would be slow as a result. To keep this from happening, SQL Server has even smaller levels of data storage inside your data files that you don't see, called pages and extents.

Understanding Pages
Pages are the smallest unit of storage in a SQL Server data file. Pages are 8,192 bytes
each and start with a 96-byte header. This means that each page can hold 8,096 bytes of
data. There are several different types of pages (not all listed here), each one holding a
different type of data:


• Data. This type of page contains most of the data you enter into your tables. The only kinds of data entered by users that aren't stored in a data page are text and image data, because text and image data are usually large and warrant their own pages.
• Text/image. The text, ntext, and image datatypes are designed to hold large objects, up to 2 GB. Large objects such as pictures and large documents are difficult to retrieve when they're stored in a field in one of your tables, because SQL Server returns the entire object when queried for it. To break the large, unwieldy objects into smaller, more manageable chunks, text, ntext, and image datatypes are stored in their own pages. This way, when you request SQL Server to return an image or a large document, it can return small chunks of the document at a time rather than the whole thing all at once.
• Index. Indexes are used to accelerate data access by keeping a list of all the values in a single field (or a combination of multiple fields) in the table and associating those values with a record number. Indexes are stored separately from data in their own page type.
• Global Allocation Map. When a table requires more space inside the data file where it resides, SQL Server doesn't just allocate one page at a time. It allocates eight contiguous pages, called an extent. The Global Allocation Map (GAM) page type is used to keep track of which extents are allocated and which are still available.
• Index Allocation Map. Although the GAM pages keep track of which extents are in use, they don't monitor the purpose for which the extents are being used. The Index Allocation Map (IAM) pages are used to keep track of what an extent is being used for; specifically, to which table or index the extent has been allocated.
• Page Free Space. This isn't an empty page, as the name may suggest. It's a special page type used to keep track of free space on all the other pages in the database. Each Page Free Space page can monitor the free space on up to 8,000 other pages. That way, SQL Server knows which pages have free space when new data needs to be inserted.

Understanding Extents
An extent is a collection of eight contiguous pages used to keep the database from
becoming fragmented. Fragmentation means that pages that belong together, usually
belonging to the same table or index, are scattered throughout the database file. To avoid
fragmentation, SQL Server assigns space to tables and indexes in extents. That way, at
least eight of the pages should be physically next to one another, making them easier for
SQL Server to locate. SQL Server uses two types of extents to organize pages: uniform
and mixed.

Uniform extents are those entirely owned by a single object. For example, if a single table owns all eight pages of an extent, it's considered uniform. Mixed extents are used for objects that are too small to fill eight pages by themselves. In that instance, SQL Server divvies up the pages in the extent among multiple objects.

TAKE NOTE

Transaction logs aren't organized into pages or extents. They contain a list of transactions that have modified your data, organized on a first-come, first-served basis.

Estimating Database Size


THE BOTTOM LINE

Dynamically adjusting file size is an expensive operation in terms of overhead. Try to set file allocations correctly through prior planning.


If there weren't electronic databases or computers and your job was still to design physical storage for your data, you'd have to know how much data there was, its growth rate, and how much more there will be. Armed with this information, you'd estimate how many shelf feet it would require (or convert to miles, if appropriate). Then, you'd estimate the storage space you needed and select a warehouse (or more than one) that met your requirements.
When you design a database, you'll likely have to estimate how large the database will be when filled with data. This makes sense when you consider that the old adage "waste not, want not" rings true regarding hard-disk space on your SQL Server. Because databases are files that are stored on your hard disk, you can waste hard-disk space if you make them too big. If you make your database files too small, SQL Server will have to expand the database file, or you may need to create a secondary data file to accommodate the extra data, a process that can slow the system and users.
As you'll see in more detail in Lesson 8, estimating the size of a database can also help determine whether the database design needs refining. For example, you may determine that the estimated size of the database is too large to implement in your organization and that more normalization is required. Conversely, the estimated size may be smaller than expected. This would allow you to denormalize the database to improve query performance.

Planning for Capacity


Some complex formulae let you precisely estimate the size of a database, but you can
get a good ballpark estimate of the capacity you need to plan for by asking yourself a
few questions and applying simple arithmetic. The easiest way to estimate the size of a
database is to estimate the size of each table individually and then add those values. The
size of a table depends on whether the table has indexes and, if so, what type of indexes.
Here are the general steps to estimate the size of your database:

LAB EXERCISE
Perform the exercise in your lab manual.

1. Calculate the record size of the table in question. This may not be easy to do as some
datatypes have variable lengths. For such columns estimate the average size and then sum
the actual or estimated size of each column in the table.
2. Divide 8,096 by the row size from step 1, and round down to the nearest number. The
figure 8,096 is the amount of data a single data page can hold, and you round down
because a row cant be split across pages.
3. Divide the number of rows you expect to have by the result from step 2. This tells you
how many data pages will be used for your table.
4. Multiply the result from step 3 by 8,192, the size of a data page in bytes. This tells you approximately how many bytes your table will take on the disk.
You'll try this process in Exercise 2.1.
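As a quick worked example (the numbers are hypothetical), suppose a table has an average row size of 125 bytes and you expect 1,000,000 rows:
• 8,096 / 125 = 64 rows per page (rounded down).
• 1,000,000 rows / 64 rows per page = 15,625 data pages.
• 15,625 pages × 8,192 bytes = 128,000,000 bytes, or roughly 122 MB on disk for this table.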

Data Compression
SQL Server 2008 includes two new data compression features for reducing disk-space requirements: row compression and page compression. Only one type of
compression can be specified at a time on the same object. Compression can be used
on both regular tables and nonclustered indexes. Clustered indexes and views can be
considered as compressed if the data in the table is compressed since views and clustered
indexes are representations of the regular table data. The space savings with these
compression methods will, as with all forms of data compression, depend on the nature
of the data being compressed.


TAKE NOTE
Data Compression is only available in the Enterprise, Developer, and Evaluation editions of SQL Server 2008.

TAKE NOTE
Data compressed using row or page compression should result in faster backup and restore times. If Backup Compression is also enabled, the potentially redundant or duplicative compression may not show a significant time savings.

A new stored procedure named sp_estimate_data_compression_savings has been provided with SQL Server 2008 to provide estimated space savings without having to actually compress a table first. This stored procedure needs a table or index name and either ROW or PAGE as a compression method.
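A minimal sketch of a call follows, using the same hypothetical mytable that appears in the examples below:

EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'mytable',
    @index_id = NULL,          -- NULL = evaluate all indexes on the table
    @partition_number = NULL,  -- NULL = evaluate all partitions
    @data_compression = 'PAGE';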
Both row-based and page-based compression are enabled via either the CREATE TABLE,
CREATE INDEX, ALTER TABLE, or ALTER INDEX commands. Examples of both row
and page compression are shown:
Row Compression. Compresses all columns in all rows in a table (or index). This type
of compression involves compressing each row individually. Row compression is preferred over page compression when the data to be compressed has a higher percentage of
unique data as compared to repetitive data:
ALTER TABLE mytable REBUILD
WITH (DATA_COMPRESSION = ROW);

Page Compression. Also compresses all columns in all rows in a table; however, the
method of compression spans multiple rows thus involving an entire page of data. Page
compression can be thought of as a higher or further level of compression because when
page compression is specified, row compression is done first, then the additional page
level compression is applied. The purpose of page compression is to reduce the amount
of redundant data stored in a page regardless of which row it is in. Thus page compression is preferred over row compression when the data on a page to be compressed has a
higher percentage of repetitive data as compared to unique data:
ALTER TABLE mytable REBUILD
WITH (DATA_COMPRESSION = PAGE);

Sparse Columns
SQL Server 2008 includes a new storage space savings feature known as Sparse
Columns. Even if a column often has NULL data, space must be allocated for the
column. The algorithm used in assigning space to columns of data can be complex
depending on the datatypes involved. SQL Server disregards the order in which columns
are specified in the CREATE TABLE command and reorganizes the columns that are
defined for the table into a group for fixed-size columns and a group for variable-length
columns. Using a sparse-column option for a fixed-length column potentially alters this
fixed-space allocation. When the majority of the rows in a table have null data for a
particular column, then that column is a probable candidate for use as a sparse column.
Defining a column as sparse can actually increase the space used if the majority of the
rows have data in the column. Sparse columns also require some additional processing
overhead, so like most things, using sparse columns is a trade-off and you should use
your best judgment depending on the data.

A considerable number of rules must be followed when using sparse columns. Here is a list to remember:
• Every sparse column must be nullable.
• No default value constraint or rule can be applied to a sparse column.
• The column options IDENTITY and ROWGUIDCOL cannot be used on a sparse column.
• All datatypes and attributes of datatypes are supported except for GEOGRAPHY, GEOMETRY, TEXT, NTEXT, IMAGE, TIMESTAMP, VARBINARY (MAX), and FILESTREAM.
• User-defined datatypes cannot be sparse.
• A table with one or more sparse columns cannot be compressed.
• A computed column cannot be sparse; however, a sparse column could be used to calculate a computed column.
• Sparse columns cannot be used where merge replication is used.
• Sparse columns cannot be part of a clustered index nor part of a primary key.

TAKE NOTE
Sparse columns are only available in the Enterprise, Developer, and Evaluation editions of SQL Server 2008.

A sparse column can be implemented simply by adding the keyword SPARSE to the column definition. The following example shows how this can be done:

CREATE TABLE address(
    addressid   INT PRIMARY KEY,
    streetinfo1 CHAR(50) NULL,
    streetinfo2 CHAR(50) NULL SPARSE,
    city_name   CHAR(20) NULL,
    statecd     CHAR(2),
    zipcode     CHAR(9)
);

REF
Data compression features are also new in SQL Server 2008. See Lesson 2.

TAKE NOTE

Because sparse columns normally should have a high percentage of null-valued rows,
filtered indexes are appropriate for these columns. A filtered index on a sparse column can
index only those rows that have actual values. This results in a smaller and more efficient
index on the column.
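For example, a filtered index on the sparse streetinfo2 column from the address table shown earlier might look like this (the index name is illustrative):

-- Only rows that actually contain a value are included in the index.
CREATE INDEX IX_address_streetinfo2
ON address (streetinfo2)
WHERE streetinfo2 IS NOT NULL;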

Understanding RAID
Placing database files in the appropriate location is highly dependent on the available
hardware and software. There are few hard-and-fast rules when it comes to databases. In
fact, the only definite rule is that of design. The more thoroughly you plan and design
your system, the less work it will be later, which is why it's so important to develop a
good capacity plan.

You must keep several issues in mind when youre deciding where to place your database files.
They include planning for growth, communication, fault tolerance, reliability, and speed.
When disks are arranged in certain patterns and use a specific controller, they can be a
Redundant Array of Inexpensive Disks (RAID) set. RAID is one of the best measures you can
take to ensure the reliability and consistency of your database.

REF
Lesson 10 discusses RAID and its role in assuring high availability.

For speed and reliability, it's better to have more disks, and the faster the better. SCSI-type disks are typically faster than IDE, although they're slightly more difficult to configure. SATA and SAS disks may make excellent choices for both speed and reliability. RAID controllers typically support only one type of drive interface, so your choices for a controller and drives must match each other. Note further that for non-SCSI drives, different RAID controllers
support different quantities of drives based on the number of connectors. This potential
limitation must be taken into account as well as the physical number of drives that can be
installed in a cabinet.
Several numbers are associated with RAID, but the most common are 0, 1, 5, and 10 as
shown in Table 2-1. Each has its own features and drawbacks.
RAID 0 uses disk striping: it writes data across multiple hard-disk partitions in what is called a stripe set. This can greatly improve speed because multiple hard disks are working at the same time. Although RAID 0 gives you the best speed, it doesn't provide any fault tolerance. If any one of the hard disks in the stripe set is damaged, you lose all your data.
RAID 1, called disk mirroring, writes your information to disk twice: once to the primary drive and once to the mirror drive. This gives you excellent fault tolerance, but it's fairly slow because you must write to disk twice to two different hard drives. RAID 1 is optimized for fast reads.



Table 2-1
RAID Levels

RAID LEVEL | DESCRIPTION
RAID 0     | Supports striping across any number of disks, but doesn't support mirroring. Doesn't provide fault tolerance.
RAID 1     | Supports mirroring only with two disks.
RAID 5     | Combines striping with parity information to protect data. The parity information can be used to reconstruct up to one failed drive.
RAID 10    | Supports striping across mirrored pairs of disks. Because it provides the fault tolerance of RAID 1 and the performance advantages of RAID 0, it's also known as RAID 1+0.

CERTIFICATION READY?
Know the different types of RAID and what types are preferred for different situations.

RAID 5 requires at least three physical drives and works by writing parts of data across all drives in the set. Parity checksums are also written across all disks in the stripe set, giving you excellent fault tolerance because the parity checksums can be used to re-create information lost if a single disk in the stripe set fails. To understand this, think of a math problem like 3 + 2 = 5. Now think of one drive storing the number 3 and the other storing 2, with 5 as the parity checksum. If you remove one of the drives, you can re-create the lost data by referring back to the other two: for example, x + 2 = 5 means that x = 3. However, if more than one disk in the stripe set fails, you'll lose all your data. RAID 5 is often called "stripe set with parity."
RAID 10 is a combination of both RAID 1 and RAID 0. This level of RAID should be used in mission-critical systems that require uptime 24 hours a day, 7 days a week, and the fastest possible access. RAID 10 implements striping and then mirrors the stripe sets. You still have excellent speed and excellent fault tolerance, but you also have the added expense of using twice the disk space of RAID 0. Because it doesn't store a parity bit, it's fast, but it duplicates the data on four or more drives to be safe. This type of RAID is best for a situation that can afford no SQL Server downtime.

TAKE NOTE

This discussion covers hardware RAID. Windows Server operating systems also provide
RAID implemented in software. Generally hardware RAID is faster. With software RAID,
all functions that would be handled by a hardware RAID controller must be handled in
software, which introduces an extra load on the server.

Designing Transaction Log Storage


THE BOTTOM LINE

The write-ahead log ensures data integrity through mishaps such as a power failure. Always
train your data-entry people to check their last submission following a disaster to ensure the
transaction actually survived the catastrophe.
Every SQL Server database has a transaction log that records all transactions and the database
modifications made by each transaction. Think of it as an ongoing collection of everything
that has happened to your database, a diary of database doings.
The transaction log is a critical component of the database, and if there is a system failure,
the transaction log may be required to bring your database back to a consistent state. For that
reason, the transaction log should never be deleted or moved unless you fully understand the
ramifications of doing so.


The transaction log supports a number of operations:


Recovering individual transactions. If an application issues a ROLLBACK statement,
or if the Database Engine detects an error such as the loss of communication with a
client, the log records are used to roll back the modifications made by an incomplete
transaction.
Recovering all incomplete transactions when SQL Server is started. When an
instance of SQL Server is started, it runs a recovery of each database. Every modification
recorded in the log that may not have been written to the data files is rolled forward.
Every incomplete transaction found in the transaction log is then rolled back to make
sure the integrity of the database is preserved.
Rolling a restored database, file, filegroup, or page forward to the point of failure.
If a hardware or disk failure affecting the database files occurs, you can restore the
database to the point of failure using the transaction log. You first restore the last full
database backup and the last differential database backup, and then you restore the
subsequent sequence of the transaction log backups to the point of failure. When you
restore each log backup, all the modifications recorded in the log roll forward all the
transactions. When the last log backup is restored, SQL Server uses the log information
to roll back all transactions that were not complete at that point.
Supporting transactional replication. The Log Reader Agent monitors the transaction
log of each database configured for transactional replication and copies the transactions
marked for replication from the transaction log into the distribution database.
Supporting standby server solutions. The standby-server solutions, database mirroring, and log shipping, rely heavily on the transaction log. In a log-shipping scenario, the
primary server sends the active transaction log of the primary database to one or more
destinations. Each secondary server restores the log to its local secondary database. In the
case of database mirroring, every update to the principal database is immediately reproduced in a separate, full copy of the database: the mirror database. The principal server
instance sends each log record immediately to the mirror server instance, which applies
the incoming log records to the mirror database, continually rolling it forward.

Managing Transaction Log File Size


The best way to think of a transaction log is as a string of log records. Physically, the
sequence of log records is stored in the set of physical files that implements the transaction log, meaning that the transaction log maps over one or more physical files.
SQL Server divides each physical log file internally into a number of virtual log files. Virtual
log files have no fixed size, and there is no fixed number of virtual log files for a physical log
file. The Database Engine chooses the size of the virtual log files dynamically while it's creating or extending log files, and it tries to keep the number of virtual files small. The size of the virtual files after a log file has been extended is the sum of the size of the existing log and the size of the new file increment. The size or number of virtual log files can't be configured or set by administrators.
The transaction log is a wraparound file. To understand what that means, assume you have a
database with one physical log file divided into five virtual log files. When the database was
created, the logical log file began making entries from the start of the physical log file. New
log records have been added at the end of the logical log, and they expand toward the end of
the physical log. When they reach the end of the physical log file, the new log records wrap
around to the start of the physical log file. This cycle repeats endlessly, as long as the end of
the logical log never reaches the beginning of the logical log.


If the end of the logical log reaches the start of the logical log, then one of two things occurs.
If the FILEGROWTH setting is enabled for the log and space is available on the disk, the file
is increased by the amount specified in the GROWTH_INCREMENT setting and the new
log records are added to the extension. If the FILEGROWTH setting isn't enabled, or the
disk that is holding the log file has less free space than the amount specified in GROWTH_
INCREMENT, a 9002 error is generated.

TAKE NOTE

If the log contains multiple physical log files, the logical log will move through all the
physical log files before it wraps back to the start of the first physical log file.

TRUNCATING THE TRANSACTION LOG


If log records were never deleted from the transaction log, the logical log would grow until it
filled all the available space on the disks holding the physical log files. To reduce the size of
the logical log and free disk space for reuse by the transaction log file, you must truncate the
inactive log periodically.
Transaction logs are divided internally into sections called virtual log files. Virtual log files
are the unit of space that can be reused. Only virtual log files that contain just inactive log
records can be truncated. The active portion of the transaction log (the active log) can't be truncated because the active log is required to recover the database. The most recent checkpoint defines the active log. The log can be truncated up to that checkpoint, and the inactive portion is marked as reusable.

TAKE NOTE
Truncation doesn't reduce the size of a physical log file. Reducing the physical size of a log file requires shrinking the file.

REF
Lesson 11 discusses recovery models.

UNDERSTANDING THE TRUNCATION AND RECOVERY MODEL


The recovery model of a database determines when transaction log truncation occurs (you'll learn more about recovery models in Lesson 11):
Simple recovery model. Transaction log backups aren't supported under this model, but log truncation is automatic. The simple recovery model logs only the minimal information required to ensure database consistency after a system crash or after restoring a data backup. This minimizes the space requirements of the transaction log compared to the other recovery models. To prevent the log from filling, the database requires sufficient log space to cover the possibility of log truncation being delayed.
Full and bulk-logged recovery models. In the full or bulk-logged recovery model, all log records must be backed up to maintain the log chain, a series of log records having an unbroken sequence of log sequence numbers (LSNs). The inactive portion of the log can't be truncated until all its log records have been captured in a log backup.
The log will always be truncated when you back up the transaction log, as long as at least one of the following conditions exists:
• The BACKUP LOG statement doesn't specify WITH NO_TRUNCATE or WITH COPY_ONLY.
• A checkpoint has occurred since the log was last backed up.
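As a brief sketch (the database name and backup path are hypothetical), a routine log backup and a copy-only log backup look like this:

-- Routine log backup; the inactive portion of the log can be truncated afterward.
BACKUP LOG SalesDB
TO DISK = 'E:\Backups\SalesDB_log.trn';

-- Copy-only log backup; does not truncate the log or affect the backup chain.
BACKUP LOG SalesDB
TO DISK = 'E:\Backups\SalesDB_log_copyonly.trn'
WITH COPY_ONLY;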

MONITORING LOG SPACE USE


You can monitor log space use by using DBCC SQLPERF (LOGSPACE) as shown in
Figure 2-1. This command returns information about the amount of log space currently used
and indicates when the transaction log is in need of truncation.

Figure 2-1
Typical results from DBCC SQLPERF (LOGSPACE)

To get information about the current size of a log file, its maximum size, and the autogrow
option for the file, you can also use the size, max_size, and growth columns for that log file in
sys.database_files.
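For instance, the following statements (run in the context of the database of interest) show both approaches; note that size, max_size, and growth are reported in 8 KB pages unless growth is a percentage:

DBCC SQLPERF (LOGSPACE);

SELECT name, size, max_size, growth, is_percent_growth
FROM sys.database_files
WHERE type_desc = 'LOG';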

LAB EXERCISE
Perform the exercise in your lab manual.

SHRINKING THE SIZE OF THE LOG FILE


As you've seen, truncating the transaction log is essential because doing so frees disk space for reuse. However, truncation doesn't reduce the physical size of the log file. To do that, you need to shrink the log file to remove one or more virtual log files that don't hold any part of the logical log (that is, inactive virtual log files). When a transaction log file is shrunk, enough inactive virtual log files are removed from the end of the log file to reduce the log to approximately the target size.
In Exercise 2.2, you'll shrink a transaction log file.
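A minimal sketch of the operation, assuming a log file whose logical name is SalesDB_Log and a target size of 512 MB:

USE SalesDB;
GO
-- The target size is specified in megabytes.
DBCC SHRINKFILE (SalesDB_Log, 512);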

ADDING OR ENLARGING A LOG FILE


If you don't want to (or can't) shrink the log file, another way to gain space is to enlarge
the existing log file (if disk space permits) or add a log file to the database, typically on a
different disk.
To add a log file to the database, use the ADD LOG FILE clause of the ALTER DATABASE
statement. Adding a log file allows the log to grow.
To enlarge the log file, use the MODIFY FILE clause of the ALTER DATABASE statement,
specifying the SIZE and MAXSIZE syntax.
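Both approaches are sketched below; the database name, logical file names, paths, and sizes are hypothetical:

-- Add a second log file on another disk.
ALTER DATABASE SalesDB
ADD LOG FILE (NAME = SalesDB_Log2,
              FILENAME = 'F:\SQLLogs\SalesDB_Log2.ldf',
              SIZE = 1GB,
              MAXSIZE = 10GB,
              FILEGROWTH = 512MB);

-- Or enlarge the existing log file.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Log, SIZE = 4GB, MAXSIZE = 20GB);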

UNDERSTANDING TRANSACTION LOG STORAGE


As you've seen, transaction logs and their storage depend on a number of factors, and for the
most part there are no hard-and-fast rules; but you should follow some basic principles when
designing storage.
Normally you should store transaction log files and data files on separate disk volumes. Note
that when using RAID, it is common to have one large RAID array that is logically separated
into multiple volumes. If this is the case, then you may wish to consider implementing
multiple RAID arrays.


You should reduce the risk of damage to your transaction log by locating it on fault-tolerant
storage. A prudent precaution is to also make multiple copies of log backups by backing up the
log to disk and then copying the disk file to another device, such as a separate disk or tape.

Designing Backup-File Storage


THE BOTTOM LINE

The first line of defense against a disaster is your backup. Do it regularly. Store the media in a
location where, if you have a fire, they won't burn along with your server room.

In SQL Server, you're limited to placing database files on what SQL Server deems to be a local hard disk. Your local hard disks can be on your local machine or on a hardware device that is connected directly to the SQL Server machine (such as a hardware RAID array). Although you have this limitation with your active database files, this rule doesn't apply to your backups. Backups can be placed anywhere in your enterprise, using named pipe, shared memory, TCP/IP, and VIA protocols on local hard disks, networked hard disks, and tape.

Managing Your Backups


To be able to restore your system when needed, you must manage your backups carefully.
Each backup contains any descriptive text provided when the backup was created, as well as the backup's expiration information. You can use this information to:
• Identify a backup.
• Determine when you can safely overwrite the backup.
• Identify all the backups on a backup medium (tape or disk).
A complete history of all backup and restore operations on the server is stored in the msdb
database. Management Studio uses this history to identify the database backups and any
transaction log backups on the specified backup medium, as well as to create a restore plan.
The restore plan recommends a specific database backup and any subsequent transaction log
backups related to this database backup.
If msdb is restored, any backup history information saved since the last backup of msdb was
created is lost. Hence, you should back up msdb frequently enough to reduce the risk of
losing recent updates.
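Because this history is kept in ordinary msdb tables, you can also query it directly; a minimal sketch:
-- List recent backups recorded in msdb, newest first
SELECT bs.database_name,
       bs.type,                      -- D = database, I = differential, L = log
       bs.backup_finish_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
     ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;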
When you're designing database backup-file storage, there are a number of steps and procedures
you should include in your design. The first, and one of the most critical, is to store your backups in a secure place, preferably a secure site removed from the location where the data exists.
Label backup media to avoid accidentally overwriting critical backups. You should also write
expiration dates on backups as protection against inadvertent overwriting. Labeling also allows
for easy identification of the data stored on the backup media or the specific backup set.
You should keep older backups for a designated amount of time, in case the most recent
backup is damaged, destroyed, or lost. When creating backups, consider using RAID 10.
As you recall, with RAID 10, you have a mirrored set of data striped across several mirrored pairs for additional I/O throughput. Because a backup primarily reads from the
database and writes out the backup file, the write advantage will be noticeable on disks storing
backup files.
Another worthwhile step is to write to disks that are locally attached instead of writing to
network-attached storage. If the data is being written to direct-attached storage, you can
eliminate factors outside the server that may increase the backup time.


Maintaining Transaction Log Backups


As you saw in the previous section, if you use the full or bulk-logged recovery models,
making regular backups of your transaction logs is essential to recovering data.
Transaction log backups generally use fewer resources than database backups. As a result, you
can create them more frequently than database backups, reducing your risk of losing data.

TAKE NOTE

A transaction log backup can be larger than a database backup. Suppose, for example, that
you have a database with a high transaction rate. In that case, the transaction log will grow
quickly. The best approach will be to create transaction log backups more frequently.
There are three types of transaction log backups. A pure log backup contains only transaction log records for an interval, without any bulk changes. A bulk log backup includes log
and data pages changed by bulk operations. In this type of backup, point-in-time recovery
isn't allowed. A tail-log backup is taken from a possibly damaged database to capture the log
records that haven't yet been backed up. A tail-log backup is taken after a failure in order to
prevent work loss and can contain either pure log or bulk log data.
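A tail-log backup is created with the BACKUP LOG statement; the sketch below assumes the data files are damaged but the log file is still readable (the database and path names are illustrative):
-- Capture the tail of the log without truncating it,
-- even though the database itself may be damaged
BACKUP LOG AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_tail.trn'
WITH NO_TRUNCATE;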

CERTIFICATION READY?
You remembered to back up the current log (tail log) after a catastrophic data disk failure. Congratulations! When you restore this log, do you restore it WITH RECOVERY or WITH NORECOVERY?

Making regular transaction log backups is an essential step in your database design, as you'll
see in Lesson 11. As you already learned, in addition to permitting you to restore the backed-up transactions, a log backup truncates the log to remove the backed-up log records from
the log file. If you don't back up the log frequently enough, the log files can fill up. If you
lose a log backup, you may not be able to restore the database past the preceding backup.
Therefore, you should store the chain of log backups for a series of database backups.
If your most recent full database backup is unusable, you can restore an earlier full database backup
and then restore all the transaction log backups created since that earlier full database backup.
Because of the crucial role transaction log backups play in restoring a damaged database, you
should make multiple copies of log backups by backing up the log to disk and then copying
the disk file to another device, such as a separate disk or tape.

Backup Compression
SQL Server 2008 includes an easy-to-implement method of incorporating compression
when conducting database backups. Backups can be set to automatically use compression
via a new server-level option. This new option is set on the Database Settings node of the
Server Properties. This option setting can be overridden by specifying in the BACKUP
command whether or not compression should be performed. When restoring from a
compressed backup, no additional command syntax is necessary as SQL Server 2008
handles the decompression automatically. Be aware, however, that a compressed backup
cannot be read by an earlier version of SQL Server.

An example of the command syntax for using compression in a backup is shown next:
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\SQLServerBackups\AdventureWorks.Bak'
WITH COMPRESSION;
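If you prefer to set the server-wide default from Transact-SQL rather than through the Server Properties dialog, a minimal sketch looks like this:
-- Turn on backup compression by default for a SQL Server 2008 instance
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;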

TAKE NOTE

Note that the ability to create backups with compression is only available with the
Enterprise, Developer, and Evaluation editions of SQL Server 2008. All editions of SQL
Server 2008 can restore a compressed backup.


Deciding Where to Install the Operating System


THE BOTTOM LINE

A database guideline has evolved: use three spindles, one for the operating system, one for
the data, and one for the log. With SQL Server you may also want one for tempdb. Add
additional drives (spindles) while maintaining these three (or four) categories.
To ensure the maximum utilization of resources while enhancing security for your database
server, you should install the operating system files on a spindle separate from data and
applications. In the case of SQL Server, you should install the Windows operating system,
along with the page file, on its own drive (with or without the SQL Server executables). NTFS
5.0, introduced with Windows 2000, supports both file encryption and compression. By
default, these two features are turned off on a newly installed Windows 2000, 2003, or 2008
Server. Although these features do provide some benefits under limited circumstances, they
don't provide any benefits for SQL Server. SQL Server is very I/O intensive, and anything
that increases disk I/O hurts SQL Server's performance, so using either of these features can
greatly hurt performance.
Both file encryption and compression significantly increase disk I/O because data files have
to be manipulated on the fly as they're used. If either of these settings has been activated by
accident, you should turn it off.

TAKE NOTE

SQL Server 2008 introduces compressible rows and pages (slightly different concepts
than the operating-system solution) that may result in faster I/O. Study the BOL topics on
Compression Implementation to see if these methods will work for you.

Deciding Where to Place SQL Server Service Executables


THE BOTTOM LINE

Generally, place SQL Server files on the operating system spindle. Generally, change the
owner of services to a domain user and provide that service owner with only the rights and
permissions needed.
Each service in SQL Server represents a process or set of processes. Depending on the
Microsoft SQL Server components you choose to install, SQL Server 2005 Setup installs the
following 10 services:

SQL Server Database Services


SQL Server Agent
Analysis Services
Reporting Services
Notification Services
Integration Services
Full-Text Search
SQL Server Browser
SQL Server Active Directory Helper
SQL Writer

You should install only the services you'll be using with SQL Server 2005.


On all supported operating systems, SQL Server and SQL Server Agent run as Microsoft
Windows services. For SQL Server and SQL Server Agent to run as services in Windows,
SQL Server and SQL Server Agent must be assigned a Windows user account. Typically,
both SQL Server and SQL Server Agent are assigned the same user account, either the local
system or a domain user account. However, you can customize the settings for each service
during the installation process.

TAKE NOTE

Program files and data files can't be installed on a removable disk drive, on a file system that
uses compression, or on shared drives on a failover cluster instance.

Specifying the Number and Placement of Files for Each Database


THE BOTTOM LINE

Dynamic tables require frequent backups; static tables need to be backed up just once.
Analyze your system. Create filegroups to manage your database objects efficiently.

SQL Server maps a database over a set of operating-system files. Data and log information are
never mixed in the same file, and individual files are used by only one database. As explained
earlier, filegroups are named collections of files and are used to help with data placement and
administrative tasks such as backup and restore operations.
SQL Server data and log files can be put on either FAT or NTFS partitions, with NTFS
highly recommended because of its security aspects. Read/write data filegroups and log files
can't be placed on a compressed NTFS file system. Only read-only databases and read-only
secondary filegroups can be put on a compressed NTFS file system.

Setting Up Database Files


All SQL Server databases are composed of three file types:
Primary data files. The primary data file is the starting point of the database and points
to the other files in the database. Every database has one primary data file, typically with
an .mdf extension.
Secondary data files. Secondary data files make up all the data files other than the primary data file. Some databases may not have any secondary data files, whereas others
have several secondary data files. The usual extension for secondary data files is .ndf.
Log files. Log files hold all the log information that is used to recover the database.
There must be at least one log file for each database, although there can be more than
one. The recommended filename extension for log files is .ldf.

TAKE NOTE

Although the .mdf, .ndf, and .ldf filename extensions aren't required, it's a good idea to use
them because they help you identify the different kinds of files and their use.
The locations of all the files in a database are recorded in the primary file of the database and
in the master database. SQL Server uses the file location information from the master database most of the time. In the following situations, it uses the file location information in the
primary file to initialize the file location entries in the master database:
When attaching a database using the CREATE DATABASE statement with either the
FOR ATTACH or FOR ATTACH_REBUILD_LOG option
When upgrading from SQL Server version 2000 or version 7.0 to SQL Server 2005
When restoring the master database
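For example, attaching a database from its primary data file might look like the following sketch (the database name and file path are hypothetical):
-- Attach an existing database; SQL Server reads the locations of the
-- remaining files from the primary data file
CREATE DATABASE Sales
ON ( FILENAME = 'D:\SQLData\Sales.mdf' )
FOR ATTACH;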


Setting Up Filenames
Each SQL Server file has two different names. The logical_file_name is used to refer to
the physical file in all Transact-SQL statements. The logical filename must comply with
the rules for SQL Server identifiers and must be unique among logical filenames in the
database. The OS_file_name is the name of the physical file, including the directory
path. It must follow the rules for operating system filenames.

Setting Up File Size


SQL Server files can grow automatically from their originally specified size. This growth
is in keeping with a growth increment you define when you create the file. Say, for example, that you have a 100 MB file, and you define the growth increment as 10 MB. When
the file fills its 100 MB, it automatically grows to 110 MB; when it fills 110 MB, it grows
to 120 MB, and so on. If there are multiple files in a filegroup, they won't autogrow until
all the files are full; growth then occurs in a round-robin fashion. Therefore, you need to
make sure the placement of files allows them sufficient room to expand.
Each file can also have a maximum size specified. If you don't specify a maximum size, the file
can continue to grow until it has used all available space on the disk. This feature is especially
useful when SQL Server is used as a database embedded in an application where the user
doesn't have convenient access to a system administrator.
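A database created with an explicit initial size, growth increment, and maximum size might be defined as in the following sketch (the names, paths, and sizes are illustrative only):
CREATE DATABASE Inventory
ON PRIMARY
( NAME = Inventory_Data,
  FILENAME = 'D:\SQLData\Inventory.mdf',
  SIZE = 100MB,          -- initial size
  FILEGROWTH = 10MB,     -- growth increment
  MAXSIZE = 1GB )        -- upper limit; use UNLIMITED to grow until the disk is full
LOG ON
( NAME = Inventory_Log,
  FILENAME = 'E:\SQLLogs\Inventory.ldf',
  SIZE = 20MB,
  FILEGROWTH = 5MB,
  MAXSIZE = 200MB );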

Setting Up Database Filegroups


Database objects and files can be grouped together in filegroups for administration purposes
as well as for allocation. There are two types of filegroups:
Primary filegroup. Contains the primary data file and any other files not specifically
assigned to another filegroup. All pages for the system tables are allocated in the primary
filegroup.
User-defined filegroups. Any filegroups that are specified by using the FILEGROUP
keyword in a CREATE DATABASE or ALTER DATABASE statement.

TAKE NOTE

Log files are never part of a filegroup. Log space is managed separately from data space.
No file can be a member of more than one filegroup. Tables, indexes, and large object data can
be associated with a specified filegroup. In this case, all their pages are allocated in that filegroup, or the tables and indexes can be partitioned. The data of partitioned tables and indexes
is divided into units, each of which can be placed in a separate filegroup in a database.
One filegroup in each database is designated the default filegroup. When a table or index is
created without specifying a filegroup, it's assumed that all pages will be allocated from the
default filegroup. Only one filegroup at a time can be the default filegroup. If no default filegroup is specified, the primary filegroup is the default filegroup.
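As a sketch, a user-defined filegroup can be added to an existing database, given a file, and made the default (the names and path are illustrative):
-- Create a new filegroup, add a file to it, and make it the default
ALTER DATABASE Inventory ADD FILEGROUP FG_Archive;
GO
ALTER DATABASE Inventory
ADD FILE
( NAME = Inventory_Archive1,
  FILENAME = 'F:\SQLData\Inventory_Archive1.ndf',
  SIZE = 100MB )
TO FILEGROUP FG_Archive;
GO
ALTER DATABASE Inventory
MODIFY FILEGROUP FG_Archive DEFAULT;
GO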

Designing Instances
THE BOTTOM LINE

Instances are isolated from each other. Someone with permission to access one instance
(normally) cannot access another instance.
An instance is a single installation of SQL Server. There are two types of SQL Server instances:
Default instance. The first installation of SQL Server on a machine is the default
instance. It doesn't have a special network name; it works by using the name of the
computer, just like always. The names of the default services remain MSSQLServer and
SQLServerAgent. If you have older SQL client applications that use only the computer
name, then you can still use those against the default instance. You can have only a
single default instance running at any given time.
Named instance. SQL Server can be installed multiple times (in different directories)
on the same computer. In order to run multiple copies at the same time, a named
instance is installed. With a named instance, the computer name and the name of the
instance are used for the full name of the SQL Server instance. For example, if the
server GARAK has an instance called SECOND, the instance is known by GARAK\
SECOND, and GARAK\SECOND is used to connect to the instance, as shown in
Figure 2-2.
Figure 2-2
Connecting to a named
instance
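From Transact-SQL you can confirm which instance a connection is using; for example:
-- Returns the computer name plus the instance name (e.g., GARAK\SECOND);
-- SERVERPROPERTY('InstanceName') returns NULL for a default instance
SELECT @@SERVERNAME AS ServerAndInstance,
       SERVERPROPERTY('InstanceName') AS InstanceName;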

One of the first decisions you have to make when installing SQL Server 2005 is whether to
use a default or named instance. Use the following guidelines in making your decision:
If you're upgrading from SQL Server 7.0, the upgraded instance must be created as a
default instance.
If you only plan to install a single instance of SQL Server on a database server, it should
be a default instance.
If you must support client connections from SQL Server 7.0 or earlier, it's easier to use a
default instance.
TAKE NOTE

You can install instances any time, even after a default or another named instance of SQL Server is installed.

SQL Server allows you to install a named instance without installing a default instance. This is
useful when you plan to have multiple instances on the same computer because the server can
host only one default instance. You can have a default instance and multiple named instances
on the same server, but its simpler if every instance has the same naming convention.
Any application that installs SQL Server Express edition should install it as a named instance.
Doing so minimizes conflict in situations where multiple applications are installed on the
same computer.

Deciding on the Number of Instances


THE BOTTOM LINE

A single server has only so many CPUs and a certain amount of RAM. Since each instance
requires its fair share of resources, you must continue that balancing act you have been
learning. At what point does the performance of the other instances suffer?


As you've seen, SQL Server supports multiple SQL Server instances on a single server or
processor. Only one instance can be the default instance; all others must be named instances.
A computer can run multiple instances of SQL Server concurrently, and each instance runs
independently of other instances. All instances on a single server or processor must be the
same localized version of SQL Server 2005.
Table 2-2 shows the number of instances supported for each instance-aware component in the
different editions of SQL Server 2005.
Table 2-2
Number of instances per SQL Server 2005 edition and component

TAKE NOTE

The maximum number of instances is reduced by at least 50 percent when servers are clustered. Further reduction may result from other clustering restrictions.

REF

For additional information, see Lesson 3.

SQL SERVER 2005 EDITION            DATABASE ENGINE INSTANCES   ANALYSIS SERVICES INSTANCES   REPORTING SERVICES INSTANCES
Enterprise or Developer            50                          50                            50
Standard, Workgroup, or Express    16                          16                            16

Based on business requirements, you need to determine the number of instances that must
be installed on a database server. You can use several instances to isolate databases on a single
server. Doing so safeguards the databases from inadvertent configuration changes. However,
each instance has certain resource requirements, and running too many instances increases the
management overhead of the operating system.
The number of instances you can install always depends on the resources available on your
server and the resources that each instance requires. Sometimes it's possible to sum the individual resource requirements for CPU, memory, and I/O, and get a reasonably good idea of
how many instances can fit.
Usually, if you have enough memory and disk space with SQL Server, you can get about four
instances comfortably, maybe one or two more if they're low-power-consumption instances.
Add many more than that, and you can run into disk trouble.
Generally speaking, one SQL Server instance will outperform two or more instances on the
same hardware, because there is some overhead for the instances themselves. If your first
instance isn't hitting a performance bottleneck, having a second instance always reduces the
resources available to both instances because the second instance maintains both the second
copy of SQL Server and its own copies of the query plans for its data.
Your obvious goal is to find a way to achieve a balance between isolation, manageability, and
resources.

Deciding How to Name Instances


THE BOTTOM LINE

As with all objects, establish a naming convention that makes sense in your environment.
Because of the large number of instances you can potentially have across an enterprise,
establishing a naming convention at the outset is good practice. Each instance must have a
unique name. The names should be short but descriptive. Pay careful attention to creating
naming conventions, avoiding cryptic names as much as possible. If the instances aren't
named clearly, you may make mistakes when accessing them. Remember that an instance
can't be renamed: once a name is assigned, that's it.


Keep in mind the following caveats and requirements when you're creating a SQL Server
instance name:
Instance names are limited to 16 characters.
Instance names are case insensitive.
An instance can't be renamed. If you change the name of the computer, that portion of
the name changes, but not the instance name.
Instance names can't contain Default, MSSQLServer, or other reserved keywords.
The first character in the instance name must be a letter or an underscore ( _ ).
Subsequent characters can be from other national scripts, the dollar sign ( $ ), or an
underscore ( _ ).
Embedded spaces or other special characters aren't allowed in instance names, nor are
the backslash ( \ ), comma ( , ), colon ( : ), semicolon ( ; ), single quote ( ' ), ampersand
( & ), or at sign ( @ ).

TAKE NOTE

Only characters that are valid in the current Microsoft Windows code page can be used
in SQL Server instance names. Also, it is probably good practice not to use complicated
instance names.
The name you give an instance is a virtual name. When creating directories and files, SQL
Server Setup uses the instance ID it generates for each server component. The server components in SQL Server 2005 are the Database Engine, Analysis Services, and Reporting Services.
The instance ID is in the format MSSQL.n, where n is the ordinal number of the component being installed. The instance ID is used in the file directory and the registry root. For
instance, if you install SQL Server and include Analysis and Reporting Services, the instance
ID will be three different numbers, and each server component will have its own instance ID.
The first instance ID generated is MSSQL.1; ID numbers are incremented for additional
instances as MSSQL.2, MSSQL.3, and so on. To confuse things a little further, if gaps occur
in the ID sequence because you've uninstalled a component or an entire instance, subsequent
installs result in SQL Server generating ID numbers to fill the gaps first. Hence, the most
recently installed instance may not always have the highest instance ID number.

CERTIFICATION READY?
Expect some exam questions combining RAID, filegroups, and possibly multiple instances. Can a new instance be created using a RAID 5 array already in use by another instance? If so, where should the data files and log files be located if multiple drive letters and arrays are available?

TAKE NOTE

SQL Server 2008 identifies instances slightly differently. The default path name is still Program Files\Microsoft SQL Server but then deviates from SQL Server 2005 in that the component is identified (e.g., MSAS10 for Analysis Services, MSRS10 for Reporting Services, and MSSQL10 for the OLTP Database Engine).
Server components are installed in directories with the format <instanceID>\<component
name>. For example, a default or named instance with the Database Engine, Analysis
Services, and Reporting Services has the following default directories:
<Program Files>\Microsoft SQL Server\MSSQL.1\MSSQL\ for the Database Engine
<Program Files>\Microsoft SQL Server\MSSQL.2\OLAP\ for Analysis Services
<Program Files>\Microsoft SQL Server\MSSQL.3\RS\ for Reporting Services
Instead of <Program Files>\Microsoft SQL Server, a <custom path> is used if the user chooses
to change the default installation directory.
SQL Server 2005 Integration Services, Notification Services, and client components aren't
instance aware and, therefore, aren't assigned an instance ID. Non-instance-aware components
are installed to the same directory by default: <system drive>:\Program Files\Microsoft SQL
Server\90\. Changing the installation path for one shared component also changes it for the
other shared components. Subsequent installations install non-instance-aware components to
the same directory as the original installation.


Deciding How Many Physical Servers Are Needed


THE BOTTOM LINE

Only so many instances can be supported on a single server. If you need more instances,
you must procure more servers. This involves hardware, software, and licenses.
Determining how many physical servers you'll need and determining how many databases
you should create both depend on the same factors. The total number of databases or
instances on a particular server isn't all that relevant. What is important is how busy each of
the databases is (and, to a certain degree, the size of the databases in relation to the size of the
available disk space). You can have servers with only one very busy database, and other servers
with many, many databases (all little used). The same logic applies to instances.
You must consider the total overall load on each physical SQL Server, not the total number
of databases on each server (unless database size is an issue). As you saw in Lesson 1, System
Monitor can be used to help you determine whether a particular SQL Server currently is
experiencing bottlenecks.
If you'll be setting up one or more new SQL Servers, determining how many databases
should be on each server isn't an easy task, because you probably don't know what the load on
each database will be. In this case, you must make educated guesses about database usage to
best distribute databases among multiple SQL Servers and get the biggest performance benefits. And once you get some experience with the databases in production, then you can move
them around as appropriate to balance the load.

Deciding Where to Place System Databases for Each Instance


When an instance of SQL Server 2005 is installed, Setup creates the database and log files
shown in Table 2-3.
During SQL Server 2005 installation, Setup automatically creates an independent set of
system databases for each instance. Each instance receives a different default directory to hold
the files for the databases created in the instance. The default location of the database and log
files is Program Files\Microsoft SQL Server\Mssql.n\MSSQL\Data, where n is the ordinal
number of the SQL Server instance.
Table 2-3
Database and log files

TAKE NOTE

As pointed out in the Deciding How to Name Instances section, SQL Server 2008 has a slightly different path structure.

DATABASE   DATABASE FILE   LOG FILE
master     Master.mdf      Mastlog.ldf
model      Model.mdf       Modellog.ldf
msdb       Msdbdata.mdf    Msdblog.ldf
tempdb     Tempdb.mdf      Templog.ldf

The SQL Server installation process prompts you to select the physical location of the files
belonging to the system databases if you want to use a location other than the default.
System databases contain information used by SQL Server to operate. You then create user
databases, which can contain any information you need to collect. You can use the Query
Editor in SQL Server Management Studio to query any of your SQL databases, including the system and sample databases.


Table 2-4 describes the type of information stored in each of the default databases.
Table 2-4
System database contents

DATABASE       CONTENTS
distribution   History information about replication. SQL Server creates this database on your server only if you configure replication.
master         Information about the operation of SQL Server, including user accounts, other SQL servers, environment variables, error messages, databases, storage space allocated to databases, and the tapes and disk drives on the SQL Server.
model          A template for creating new databases. SQL Server automatically copies the objects in this database to each new database you create.
msdb           Information about all scheduled jobs, alerts, and operators on your server. This information is used by the SQL Server Agent service.
tempdb         Temporary information and intermediate result sets. This database is like a scratchpad for SQL Server.

TAKE NOTE

An additional system database, the Resource database, is a read-only database that contains all the system objects included with SQL Server. It's usually hidden in Management
Studio. The only supported user action is to move the Resource database to the same location as the master database.
Normally, you'll leave the system databases in the default installation directory. However, you
may have to move a system database in the following situations:
Failure recovery (For example, the database is in suspect mode or has shut down because
of a hardware failure)
Planned relocation
Relocation for scheduled disk maintenance

TAKE NOTE

Common files used by all instances on a single computer are installed in the folder systemdrive:\Program Files\Microsoft SQL Server\90, where systemdrive is the drive letter where
components are installed. Normally this is drive C:.

LAB EXERCISE

Perform the exercise in your lab manual.

In Exercise 2.3, you'll see where the system database files are located.
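If you prefer a query to browsing the file system, the sys.master_files catalog view lists every file of every database on the instance; a minimal sketch:
-- Show each database on this instance and where its files live
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name,
       type_desc            -- ROWS or LOG
FROM sys.master_files
ORDER BY database_name;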

Deciding on the Tempdb Database Physical Storage


THE BOTTOM LINE

The faster the drive assigned to tempdb, the faster your database will be. A separate spindle
may be justified.
The tempdb system database is a global resource available to all users connected to the instance
of SQL Server. The tempdb system database is like a scratchpad for SQL Server and a place
where temporary information and intermediate result sets are stored. Earlier versions of SQL
Server made some use of the tempdb database, but SQL Server 2005 takes that a step further.

The new version uses the tempdb database heavily to support features such as row versioning
for triggers, online indexing, Multiple Active Result Sets (MARS), and snapshot isolation.
Consequently, you must be careful when determining the size and location of tempdb. In
addition, you should ensure that each instance has adequate throughput to the disk volumes
on which tempdb is stored.
Because it serves the same role as the reams of notepaper on which a writer outlines ideas, the
tempdb database is volatile, and no effort is made to save it from session to session. Instead,
tempdb is re-created each time the instance of SQL Server is started, and the system always
starts with a clean copy of the database. Temporary tables and stored procedures are dropped
automatically on disconnect, and no connections are active when the system is shut down.
For that reason, SQL Server doesn't allow backup and restore operations on the tempdb
system database.

TAKE NOTE

Because tempdb is re-created each time the instance of SQL Server is started, you don't
have to physically move the data and log files. The files are created in the new location
when the service is restarted. Until then, tempdb continues to use the data and log files in
the existing location.
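Relocating tempdb is therefore just a matter of recording the new file locations and restarting the service; a minimal sketch (the logical file names are the defaults, and the target paths are illustrative):
-- Point the tempdb files at a new location; the change takes effect
-- the next time the SQL Server service is restarted
ALTER DATABASE tempdb
MODIFY FILE ( NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf' );
GO
ALTER DATABASE tempdb
MODIFY FILE ( NAME = templog, FILENAME = 'T:\TempDB\templog.ldf' );
GO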
The size and physical placement of the tempdb system database can affect the performance of
a system. For example, if the size that is defined for tempdb is too small, part of the system-processing load may be taken up with autogrowing tempdb to the size required to support the
workload every time you restart the instance of SQL Server. You can avoid this overhead by
increasing the size of the tempdb database and log file.
Determining the appropriate size and location for tempdb in a production environment
depends on many factors. As described previously, these factors include the existing workload
and the SQL Server components and other features that are used.
Whenever possible, place the tempdb system database on a fast I/O subsystem. Use disk striping if there are many directly attached disks. You should also put the tempdb database on
disks other than those being used for the user databases.
For optimal tempdb performance, you can make some critical settings to the configuration
of tempdb. (SQL Server Books Online contains excellent information on how to optimize
tempdb usage that is beyond the scope of this book.)
Set the recovery model of tempdb to Simple to automatically reclaim log space. This
keeps space requirements small.
Set files to automatically grow when they need additional space. This allows the file to
grow until the disk is full.
Set the file growth increment to a reasonable level. You want to keep the tempdb database files from growing by too small a value, causing tempdb to constantly use resources
to expand, which adversely impacts performance. Microsoft recommends the following
general guidelines for setting the file-growth increment for tempdb files.

TEMPDB FILE SIZE   FILE-GROWTH INCREMENT
0 to 100 MB        10 MB
100 to 200 MB      20 MB
200 MB or more     10%

Preallocate space for all tempdb files by setting the file size to a value large enough to
accommodate the typical workload in the environment.

Create as many files as needed to maximize disk bandwidth. As a general guideline,
create one data file for each CPU on the server and then adjust the number of files up or
down as necessary. Note that a dual-core CPU is considered to be two CPUs.
When using multiple data files, make each file the same size to allow for optimal
proportional-fill performance.
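The size and growth settings described above can be applied with ALTER DATABASE; a minimal sketch, with sizes that are illustrative only:
-- Preallocate space and set reasonable growth increments for the
-- default tempdb data and log files
ALTER DATABASE tempdb
MODIFY FILE ( NAME = tempdev, SIZE = 500MB, FILEGROWTH = 50MB );
GO
ALTER DATABASE tempdb
MODIFY FILE ( NAME = templog, SIZE = 100MB, FILEGROWTH = 20MB );
GO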

LAB EXERCISE

Perform the exercise in your lab manual.

In Exercise 2.4, you'll modify the tempdb database's size and growth parameters.

Establishing Service Requirements


Every SQL Server instance is made up of a distinct set of services with specific settings
for collations and other options. The directory structure, registry structure, and service
names all reflect the specific instance ID of the SQL Server instance created during SQL
Server Setup.
You can determine which services are required by an instance based on the role played by the
instance. You can use the SQL Server Configuration Manager and the SQL Server Surface
Area Configuration tool to configure services.
SQL Server provides a large number of services, but a database server typically doesn't
require all of them. Therefore, you should establish policies for enabling and disabling
SQL Server services.
To do so, first group database servers according to their roles in the infrastructure. Then,
identify which SQL Server services need to be enabled for each group, and disable the
remaining services for that group. For example, a reporting server requires SQL Server
Reporting Services (SSRS) but may not require SQL Server Integration Services (SSIS) or the
SQL Browser service.
Isolating services reduces the risk that one compromised service could be used to compromise
others. To isolate services, use the following guidelines:
Dont install SQL Server on a domain controller.
Run separate SQL Server services under separate Windows accounts.
In a multitier environment, run Web logic and business logic on separate computers.

Specifying Instance Configurations


So far in this lesson, you've looked at a number of overlapping topics that focus on the best
ways to configure your database server's physical storage and other resource utilization.
In addition to planning where to place files and databases, designing storage, and determining
instances, there are a few other aspects of instance configuration to address. SQL Server
2005 contains a new tool, Configuration Manager, which you can use to manage the services
associated with a SQL Server instance, configure the network protocols used by SQL Server,
and manage the network connectivity configuration from SQL Server client computers.
SQL Server Configuration Manager is a Microsoft Management Console snap-in that is
available from the Start menu, or you can add it to any other Microsoft Management Console
display. Microsoft Management Console (mmc.exe) uses the SQLServerManager.msc file in
the Windows System32 folder to open SQL Server Configuration Manager.

TAKE NOTE

SQL Server Configuration Manager combines the functionality of the following SQL
Server 2000 tools: Server Network Utility, Client Network Utility, and Service Manager.


You can use Configuration Manager to perform the following tasks:


Manage services. You can use Configuration Manager to start, pause, resume, or stop services, to view service properties, or to change service properties. As you can see in Figure 2-3,
Configuration Manager gives you easy access to SQL Server Services.
Change the accounts used by services. You should always use SQL Server tools, such
as SQL Server Configuration Manager, to change the account used by the SQL Server or
SQL Server Agent services, or to change the password for the account. You can also use
Configuration Manager to set permissions in the Windows Registry so that the new account
can read the SQL Server settings.
Manage server network and client protocols. SQL Server 2005 supports Shared Memory,
TCP/IP, Named Pipes, and VIA protocols. You can use Configuration Manager to configure server and client network protocols and connectivity options. After the correct protocols
are enabled using the Surface Area Configuration tool, you usually don't need to change the
server network connections. However, you can use SQL Server Configuration Manager if you
need to reconfigure the server connections so that SQL Server listens on a particular network
protocol, port, or pipe.
Assign TCP ports to instances. If instances must listen through TCP ports, you should
explicitly assign private port numbers. Otherwise, the port numbers are dynamically assigned.
You can use the SQL Server Configuration Manager to assign port numbers. Although you
can change port numbers that are dynamically assigned, client applications that are set up to
use these port numbers may be adversely affected.

TAKE NOTE

When you're assigning ports, make sure they don't conflict with port numbers that are
already reserved by software vendors. To determine which port numbers are available,
visit the Internet Assigned Numbers Authority (IANA) Web site at the following URL:
www.iana.org/assignments/port-numbers.

Figure 2-3
SQL Server Configuration
Manager is the preferred tool
to manage many aspects of
SQL Server instance configurations, including services.

LAB EXERCISE

Perform the exercise in your lab manual.

In Exercise 2.5, you'll learn how to use the Configuration Manager.


SKILL SUMMARY
Physical storage is a prime consideration when you're planning a SQL Server database
infrastructure. In this lesson, we've reviewed best practices and parameters for storing
the transaction log and backup files. You've had a brief introduction to RAID and how it can
be used to both assure fault tolerance and optimize your storage system.
Placement of files also plays an important role in performance, and you've learned the
information you'll need to use in deciding where to install the operating system, SQL Server
service executables, files created for databases, and system databases, especially tempdb.
You've also learned that there is no magic answer about where to place files, but that your
decision in this regard will have a ripple effect across your database server.
You learned about default and named instances and how they both expand your flexibility
and ability to customize your database infrastructure while bringing along their own set of
considerations. You've learned basic functions such as deciding on the number of instances
and naming conventions. You've also learned when and where to use default and named
instances.
You learned how to set service requirements specific to your database server needs. Finally, you
learned how to use SQL Server Configuration Manager to administer services and instances, as
well as network protocols.
In the next lesson, you'll learn how to combine the material you've learned here and in
Lesson 1 to develop a database-consolidation strategy.
For the certification examination:
Be familiar with transaction logs and their storage needs. It's important that you know the
growth characteristics of transaction log files and how they impact your physical storage
design. Make sure you understand the effect truncation and shrinking have on transaction logs.
Know the different types of RAID. It's important that you be able to differentiate between the
different types of RAID and understand which type should be applied in what circumstances.
You should be aware of the relative impact each RAID type has on read-and-write operations.
Be familiar with system databases. What is their role? What do they do, and how do you differentiate among them? Make sure you have a clear idea of the effect placement of the system
databases has on overall performance and the circumstances in which they should be moved.

Understand the impact of the tempdb system database. Be certain you know the role of
the tempdb system database in an instance and the design considerations surrounding its
physical storage. Be familiar with the recommendations for the initial size of tempdb in
different situations as well as the growth increment.

Understand default and named instances. You should know the basic difference between
them and the circumstances under which a named instance is more appropriate than a
default instance, and vice versa.

Know how to administer instances. You should understand the basics of choosing the
proper naming and number of instances for your infrastructure.


Knowledge Assessment
Case Study
Mullen Enterprises
Mullen Enterprises provides database-hosting services for companies in the health
care industry. The company is now offering a new hosting service based on SQL
Server 2005.
Mullen Enterprises has a single office. Customers connect to the company network
through private WAN connections and via the Internet.

Planned Changes
The company plans to implement new SQL Server 2005 computers named Dublin,
Shannon, and Cork to host customer databases.

Existing Data Environment


Currently, Mullen Enterprises has its own database named Customers, which is used
to track customers. This database exists on a SQL Server computer named Dublin.
Internal users access this database through a web services application that allows users
to provide details that are used to build ad hoc queries that are then sent to the SQL
Server computer.

Business Requirements
Each customer can host up to five databases. Databases for a given customer are always
hosted on the same server. Each customer uses his or her own naming schema. Because
all customers are in the health care industry, most customers give their databases similar
names such as Patients, Doctors, Medications, and so on.

Performance
The company wants to maintain a minimal number of SQL Server 2005 instances and
servers.

Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
Use the information in the previous case study to answer the following questions.
1. You need to design a strategy for identifying the number of instances that any one SQL
Server computer will support. What should you do?
a. Specify that each server must have one service for each customer.
b. Specify that each server must have only one instance.
c. Specify that each server must have one instance for each database that is hosted on
the server.
d. Specify that each server must have one instance for each customer who has one or
more databases that are hosted on the server.
2. You plan to have the Cork server contain three customers: Yanni HealthCare Services,
Kelly Hospitals, Inc., and The Curtin Clinic. Following your guidelines that each server
must have one instance for each customer who has one or more databases hosted on the
server, which of the following should you do?
a. Create a default and two named instances. Place the customers with the largest
performance need on the default instance, and place each of the other customers
on their own named instance.

b. Create three named instances (Yanni_HealthCare_Services, Kelly_Hospitals, and
Curtin_Clinic) and place each customer's databases on the specific instance.
c. Create three named instances (YHCS, KH, and CC) and place each customer's
databases on the specific instance.
d. Create three named instances (YanniHealth, KellyHosp, and CurtinClinic) and
place each customer's databases on the specific instance.
3. You are planning the configuration of the SQL Server 2005 instance where the
Customer database will be stored. As a security precaution, you need to ensure that
Windows services that are not essential are disabled. Which Windows service or services
should be disabled? (Choose all that apply.)
a. SQL Browser
b. SQL Server
c. SQL Server Analysis Services
d. SQL Writer
e. SQL Server Integration Services
f. SQL Server Agent
4. You are planning the configuration of the CurtinClinic instance and trying to determine
how much space you need for the database. You want to estimate the size of the Patients
database when it is completely full. The Patients database consists of the following tables:
Name, Billing, and Orders. The total field size for the Name table is 184 bytes; the
Billing table is 313 bytes, and the Orders table is 439 bytes. You need to plan for 15,000
records. How much space should you plan for the Patients database?
a. 15.36 MB
b. 14.60 MB
c. 14.28 MB
d. 17.23 MB
5. You want to know the amount of space the transaction log for the Customer database is
using. Which T-SQL command would you use?
a. DBCC SQLPERF (LOGSPACE)
b. DBCC CALCULATE (LOGSPACE)
c. DBCC SQLPERF (TRANSACT)
d. DBCC CALL (TRANSLOGSPACE)
6. Yanni HealthCare Services advises you that their database, Orders, is a mission-critical
database. Because the database, which is related to patient care, contains all the pharmacy, laboratory, and other orders, none of which can be acted on unless confirmed by
the database, it must be available 24/7 and have the fastest possible access. Which step
should you take?
a. Place the Orders database on a SCSI disk with the fastest controller.
b. Implement RAID 0.
c. Implement RAID 10.
d. Implement RAID 15.
7. You configure the Kelly Hospitals transaction log so that it starts with a size of 20 MB.
What settings do you need to configure in order for it to grow automatically by a prespecified amount of 5 MB until it fills the disk? (Choose all that apply.)
a. FILESIZE
b. FILEGROWTH
c. GROWTH_INCREMENT
d. GROWTH_SPACE
8. The transaction log for the Yanni HealthCare Services database has reached the maximum size possible and consumed all available disk space. You want to reduce the physical size of the log in order to allow other disk operations. Which of the following is the
correct procedure?
a. Use the command DBCC SHRINKFILE.
b. Truncate the database.

c. Shrink the database from Object Explorer.


d. Force a checkpoint.
9. You have been instructed to optimize tempdb performance for all databases as part of your
optimization plan. Which of the following settings would you make? (Choose all that apply.)
a. Set the recovery model of tempdb to Full.
b. Preallocate space for all tempdb files.
c. Set the tempdb file-growth increments to at least 10 percent.
d. Always confine tempdb to a single file.
10. Which of the following tasks can you use the Configuration Manager to perform?
(Choose all that apply.)
a. Manage services
b. Change accounts used by services
c. Manage server network and client protocols
d. Assign TCP ports to instances

Designing a
Consolidation Strategy

LESSON

LESSON SKILL MATRIX
TECHNOLOGY SKILL                                                      EXAM OBJECTIVE
Design a database consolidation strategy.                             Foundational
Gather information to analyze the dispersed environment.             Foundational
Identify potential consolidation problems.                            Foundational
Create a specification to consolidate SQL Server databases.          Foundational
Design a database migration plan for the consolidated environment.   Foundational
Test existing applications against the consolidated environment.     Foundational

KEY TERMS
deploying: Migrating and stabilizing your database servers in the consolidated environment.
developing: Designing a database migration plan for the consolidated environment, creating a solution, and testing the pilot.
envisioning: Gathering information to analyze a dispersed environment and identifying potential consolidation problems.
planning: Evaluating the data you gathered in the previous phase and creating a specification to consolidate SQL Server instances.

SQL Server is a powerful data platform capable of handling many different applications
at once. However, in most cases, each application has its own dedicated SQL Server that
is underutilized. This results in greater costs than necessary to many businesses, usually
for two reasons. First, often the DBAs don't want to chance decreased performance with
multiple applications using the same database server, so they separate each one on its
own SQL Server. Second, application developers or network administrators don't realize
that the database isn't synonymous with the server, so they require a new server for each
application.
Building on the capabilities of SQL Server 2000 and greatly extending its limits, SQL
Server 2005 is a platform geared to consolidate many SQL Server 2000 instances into
fewer SQL Server 2005 servers. This Lesson looks at the terminology and pros and cons
of consolidations, and how you may want to proceed when developing a strategy for your
environment.
This Lesson looks at a consolidation strategy in four phases based on the Microsoft Solutions
Framework (MSF):

Envisioning your strategy. Gathering information to analyze a dispersed environment and identifying potential consolidation problems.
Planning your strategy. Evaluating the data you gathered in the previous phase and
creating a specification to consolidate SQL Server instances.
Developing your plan. Designing a database migration plan for the consolidated
environment, creating a solution, and testing the pilot.
Deploying your plan. Migrating and stabilizing your database servers in the
consolidated environment.

TAKE NOTE

The full MSF Process Model consists of five distinct phases:
Envisioning
Planning
Developing
Stabilizing
Deploying
The stabilizing phase has been omitted in this discussion because it is largely an action
rather than a planning phase. During the stabilizing phase, the team performs integration,
load, and beta testing on the solution. In addition, the team tests the deployment scenarios
for the solution. The team focuses on identifying, prioritizing, and resolving issues so that
the solution can be prepared for release. During this phase, the solution progresses from
the state of all features being complete as defined in the functional specification for this
version to the state of meeting the defined quality levels. In addition, the solution is ready
for deployment to the business.

One note before going further: Your consolidation plan will be unique because of the wide
variety of variables that occur with each set of servers. This Lesson's discussion assumes you
understand your environment and can extrapolate the advice and details given to make the
most informed decisions for your company.

Phase 1: Envisioning
THE BOTTOM LINE

When you're creating a consolidation plan, the first step is to consider its value by examining
the current SQL Server environment and gathering the information about the infrastructure.

The following sections highlight the main steps of envisioning the consolidation plan.

Forming a Team
Before you decide whether consolidation is a good idea for your organization, you need
to form a team to plan, create, and test your consolidation strategy. This team needs to
be involved from the start, so members have input on the benefits and drawbacks of
consolidation for the company.
The consolidation effort won't be quick or easy, and it's too difficult for one person to
manage. Even in a small company where a single IT employee may be in charge of the
consolidation effort, this person will still need input and assistance from other aspects of the
business. A team of two is required at a minimum, consisting of a technical IT representative


and a business end user, although often other people will be involved.
The technical portion of the team should have representatives from the operational side of the
company as well as the development side. Again, in a small company these may be the same person, but both perspectives should be represented. From the business side, for example, it might
be helpful to have a financial representative as well as an end user. This may be the same person,
but they should present the reasons for proceeding as well as the potential effect on end users.

Making the Decision to Consolidate


The first task when beginning a consolidation plan is to decide if you should consolidate your servers. Once this decision is made for two or more instances, then you
can proceed with your plan. The first step is to examine the reasons for and against
consolidation.

It isn't always a clear-cut case that you'll want to consolidate your SQL Server instances.
However, some good reasons exist for going through a server consolidation. The following
sections cover the main reasons you should consider consolidation.

CONSIDERING COSTS
The first reason companies look at server consolidation is cost. You should consider a few
types of costs, but first look at hard costs, those you must pay for immediately, such as the
SQL Server license. Each SQL Server you install costs money for the server software. There
may be a simple fee for the software, or there may be per-processor costs and possibly long-term maintenance contracts. Microsoft offers discounts for business categories (government,
education, and many others) for which you will undoubtedly qualify, but this remains a significant
cost for many businesses.
Hard costs are easy to quantify and calculate because they represent actual money being spent
by the company. Other costs, called soft costs, are harder to list because they consist of missed
chances for revenue or savings.
IT people are often paid salaries; or, if your company outsources your IT support, there may
be a fixed cost for the service. But every server you add requires time to set up, install, administer, patch, and so on. Soft costs can be hidden because companies don't raise the IT administrator's salary each time a new server is added. Instead, the greater workload takes away from
the administrator's ability to perform other work because of a lack of time. Or it may cause a
lack of desire to improve other areas because administrators feel overloaded and taken advantage of as the number of servers they must support grows.
A great example is the database administrator's (DBA) position. If a company's DBA has two
SQL Server instances to administer, time should be available after monitoring logs, patching,
and so on, to tune these servers, proactively rewrite queries, and perform other tasks that are
important to a smooth running SQL Server. However, if the same DBA is required to handle
five servers, then there is less time to devote to each server. For a small business, the DBA is
probably also the system administrator. Overload is much more likely because that person
may also be responsible for file servers, mail servers, web servers, and other systems. Salary
costs are the biggest component of IT, so it behooves a company to minimize staffing requirements. Consolidation helps by allowing a smaller number of employees to administer a larger
number of applications using fewer servers.
The cost strategy can also be extended to your infrastructure. Although it's unlikely that
you'll run out of IP addresses, you may have other issues. These days, as servers get smaller
and smaller (e.g., blade servers), electrical power becomes a concern. Ensuring that you have
enough electrical power to supply your servers has become more of an issue in many data
centers. Even when youre part of a small company with a single rack at a co-location facility
or in the back closet, if you continue to add servers, at some point you'll start to run short of
electrical power. Adding power lines can be a small expense or a large one, depending on your
situation, but it's never a request you want to have to make. If you begin to consolidate your servers as you upgrade existing servers or look to install new applications, you can dramatically
lower your power requirements.
An even more critical component than power in many environments is the cooling capacity in your data center. Most data centers designed in the past 10 years were created with the
expectation that 20 amps of power and 5 to 10 servers would be placed in each rack. Today,
with smaller servers, a single rack can draw more than 40 amps and contain dozens of servers,
throwing off more heat than can be removed by existing cooling systems. Large installations
are turning to liquid cooling in some cases, and rack vendors are even building liquid cooling
into their rack enclosures.
The addition of cooling capacity is a double request to a company's finances. Not only must additional cooling equipment be purchased and installed (an expensive proposition), but this equipment also requires power, adding strain to your power infrastructure. And if you're forced to move
to liquid cooling from air cooling, then a major capital investment is required. Again, consolidating servers can eliminate the need for your organization to invest in additional cooling systems.
These last two expenses, power and cooling, are soft costs. It's unlikely that adding one additional SQL Server will force you to spend money, but at some point the company will need
to expend hard cash on one of these projects. Consolidation allows you to delay, or even
eliminate, the need for any of these soft costs to become hard costs.

CONSIDERING SECURITY
One very good reason for consolidation is security. Setting up and maintaining a secure enterprise is difficult, and the fewer systems you have to secure, the more secure the enterprise
should be. Security experts talk about the surface area of attack, or the number of points at
which your security can be breached. Each new system means another chance for an attacker
to exploit a forgotten configuration, an unpatched vulnerability, or extra accounts that were
created and forgotten.
If you have one SQL Server, then you have one sa login account to worry about. When you change the password, as you should do regularly, you've increased your level of security. If you have five SQL Servers, there is a chance that one password will be forgotten or not changed when it should be. You also have five potential sa login accounts that an attacker can look to
exploit. Just as having five doors to your building is less secure than having one or two, more
servers lower your overall security by increasing the surface area for attack. They also increase
the chances that one will remain unpatched or be incorrectly configured.
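As a minimal illustration of this point, the following T-SQL rotates and optionally disables the sa login on one instance; the password shown is a placeholder, not a recommendation, and the statements assume you hold the ALTER ANY LOGIN permission.

-- Rotate the sa password on each instance you administer (placeholder value only).
ALTER LOGIN sa WITH PASSWORD = 'N3w$trong!Passw0rd';
-- If no application connects as sa, disabling the login shrinks the attack surface further.
ALTER LOGIN sa DISABLE;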

CONSIDERING CENTRALIZED MANAGEMENT


Another good reason to consolidate servers is to move to centralized server management, which can reduce costs and provide greater security. By centralizing management, you more efficiently use the staff and software resources you've devoted to this task. Each server that must be managed has its own quirks that must be known to keep it running in peak condition; this places an increased load on the staff and decreases the number of server instances that a particular DBA can effectively manage. By consolidating on fewer servers, your staff can more effectively manage the systems.
Also, some software resources are devoted to the management of the systems, whether these are log readers, performance-monitoring applications, or other tools that read information from each server instance. The licensing cost savings for these applications are easy to quantify in a consolidation plan, but the reduced monitoring load is also a driver. Network loads can be greatly reduced because querying fewer servers means less information must transit the network. In addition, the overall effectiveness of the monitoring applications can increase as they are placed under less stress. As loads grow, the ability of monitoring software to keep up with the number of nodes can be compromised, and alerts or information may be lost.
Finally, one important benefit of centralized management is standardization. By reducing the staff and resources that must be involved, it's much more likely that your standards in every
area, including setup, installation, maintenance plans, naming, and more, will be carried out
consistently. Each new server, and each new person, that the enterprise must handle increases the chance that standards won't be met properly. The deviations may be deliberate or accidental, but each
one is a potential problem area. Moving to fewer consolidated environments allows fewer
resources to be devoted to managing these systems and increases the likelihood that each
resource better understands the systems.

CONSIDERING INCREASED RETURN ON INVESTMENT


Most of the previous reasons factor into a return on investment (ROI) calculation that your company's finance people can perform when looking at a consolidation project. Because consolidation can initially result in an investment in hardware as well as potential downtime, the total ROI needs to be examined over time to determine whether it makes business sense. In many companies, hardware isn't the largest systems cost, but each purchase is highly visible.
Hard dollars (or whichever currency you use) are the easiest costs to include in an ROI study, and they're also the first hurdle you must overcome when seeking approval for a consolidation effort. If your plan can't account for the investment in new hardware, it's unlikely to be approved. Once you can prove this is a worthwhile investment, you can begin to examine other, less budgetary reasons for proceeding.
There are a few additional, less tangible reasons to consolidate your systems where possible. The first one to consider is the reduction in salary cost if you have fewer systems to administer. Maintaining fewer Windows systems also means the IT staff must do less patching and less monitoring work. This reduces the effort required to run your enterprise, which means your people are less stressed. This translates into less downtime for your applications and more availability for your business users to generate revenue.
Assuming you don't overwork your employees in some other way, consolidating your systems, if done correctly, should result in happier employees, which translates to less turnover and better morale. Each person working in your company has some knowledge about your company that is hard to replace. The less often you're forced to hire replacements, the more efficiently your company will run, and the lower your costs will be over time.

CONSIDERING OPTIMIZED USE OF RESOURCES


In many organizations, the servers in use are underutilized. This is partially a result of the fear of systems being overloaded and partially because not enough time has been spent planning system deployment. These two reasons are often intertwined, although they're unavoidable in some cases. Less frequently, the applications themselves are underutilized, at least compared to the expectations at the time they were deployed.
The fear of overloaded systems comes from historical deployments of client-server and web-based applications whose usage was incorrectly forecast. Sometimes this happened because there was not enough data to correctly forecast usage; in other cases, the impact of the applications was underestimated. Once an application is deployed and proves to be valuable, usage often increases dramatically, overloading systems.
As a result, many applications are built with the supporting servers sized for a worst-case scenario, resulting in servers that trundle along at 5 to 10 percent utilization for most of their major components (CPU, RAM, disk) and occasionally spike in response to some event. There is often a good case for consolidating some of these servers to use resources more efficiently. Moving to 50 or 60 percent usage on one system by consolidating five or six others is a more efficient use of resources, because the remaining four or five servers can be redeployed or retired. As long as the end users understand that it's acceptable for application performance to suffer when resource utilization spikes, this is a good reason to consolidate the servers.
Resources are also underutilized when time isn't spent planning deployments and sizing new systems, and when existing systems aren't examined for possible consolidation at deployment time. The most cost-effective time to handle SQL Server consolidations is prior to applications being deployed, because much of the work examined in this Lesson is performed on development systems and is duplicated in a later consolidation effort.


TAKE NOTE

You can always separate your SQL Server databases onto different servers later if performance problems become severe. For some reason, this is an easier decision for a business to make than the later decision to consolidate.

DECIDING NOT TO CONSOLIDATE


You've seen a number of reasons why consolidation makes sense. Everything comes down to a cost of some sort, but the reasons not to consolidate are more intangible. From a strictly financial point of view, wherever possible, a company should seek to consolidate its servers. However, here are some reasons why consolidation may not make sense, few of which are cost related.
Working backward from the reasons to consolidate, first examine security. Although the previous arguments assume consolidation increases your security, consolidation is a double-edged sword. If your SQL Server security is breached, then all of your databases are vulnerable. Although overall security is higher, you've greatly increased the potential losses if a breach occurs.
Consider a company with five SQL Server instances, four used for internal applications and
one supporting a public Web site. Suppose you consolidate all five applications onto one
database server. Now youre supporting the four internal applications along with the public
web application on one SQL Server. This web application presents a larger attack surface,
because more people have the ability to launch attacks through the Web. If the security of this server were compromised and an attacker gained control, they would have access to all the financial and sales information in addition to the Web site data. In this example, consider excluding the web SQL Server from the consolidation plan.
It isn't only outside attackers you have to worry about, however, because internal users can
inadvertently cause problems. Suppose a company develops its own inventory application. It
uses a database called Inventory_TDB to test its code and make changes while the production system runs against the Inventory_PDB database. There are two possible problems if
you consolidate these databases on the same server: A developer could accidentally test code
against Inventory_PDB, resulting in data-integrity problems or data losses; or a worker could
accidentally update data in the Inventory_TDB database, resulting in incorrect values for the
business. As you review consolidation strategies, consider how to mitigate similar risks in your own environment.
Another reason consolidation may not be a good idea is the effect on performance. Each application running on an instance of SQL Server uses a certain amount of resources (memory, CPU cycles, disk space, and more) in performing its function. If you move all your applications onto one server, then the applications may compete for limited resources in a way that negatively affects them. You can mitigate some of this impact by properly sizing a consolidated server, but this may still be a reason that you don't want to consolidate servers.
The performance impact can be within the SQL Server instances as well. One of the single
points of contention on a SQL Server is tempdb. Some applications make heavy use of
tempdb, and some use it relatively little, but on a single SQL Server instance, all the databases
share one tempdb. As you add databases to a SQL Server through consolidation, this can
become a huge bottleneck.
One last issue that may give you pause in considering consolidation is the impact on employees if you experience performance problems. Happy IT staff members can be quickly pushed
to their limits by performance problems resulting from consolidation. On-call pages and long
hours trying to rewrite applications, tune queries, and so on, can devastate morale and lead to
employee turnover.
Business workers can quickly become frustrated by application performance problems and decide that spending a large portion of their time waiting on the computer, as opposed to performing their job, isn't worth the aggravation. Losing one of your talented, senior employees could be devastating to your business and could overwhelm any cost savings from a consolidation of servers.
Although you may not become aware of employee issues until a consolidation plan is complete, when it may be too late to undo the process, it's something to consider before you
embark on a plan. Employees are often a company's most valuable resource, especially those working in the line of business.
Cost is usually a driving factor in deciding to consolidate servers, but sometimes the cost savings doesn't outweigh the cost outlay. Because a consolidation effort usually requires new equipment to be purchased, the cost of a new server may not be worth the investment. Suppose a company sized a new eight-CPU server for its consolidation efforts that costs $50,000. This cost might not be worth the investment required to embark on this strategy. This is often apparent when a single server becomes disk constrained. Although today's disk capacities are growing, there is still a limit to how much space a single server can support through direct-attached storage. If you exceed this capacity and are forced to consider Storage Area Network (SAN) based solutions, the initial investment can be high. It may be high enough that you determine a consolidation strategy isn't worth pursuing.
Another area that works against consolidation is the risk factor. This factor encompasses cost,
performance, and staff. If you consolidate your servers onto one Windows machine, you're in essence putting all your eggs in one basket. If a problem occurs with that one machine (an overheated CPU, a power supply failure, and so on), then all your applications fail. For some companies, this is a huge problem. Suppose a company does a brisk business on its Web site for one of its products. If a problem occurs in the Accounting application, the Web site is unaffected and the financial department works from paper until the system is fixed. However, if you put both of these databases on the same server, then an issue that occurs from an Accounting system upgrade could potentially take down the Web site. Some companies consider this an unacceptable risk, so consolidation wouldn't be a possibility.
If you recognize these risks, you can mitigate many of them by implementing high-availability
features like clustering and redundant hardware. However, these features usually dramatically
raise the cost of the solution in two ways. First are the hardware and software costs from your
vendors for the resources to support these solutions. The other cost is in staffing, because you
may need to pay for training existing employees or hire others to handle these more complex
solutions. Just as the cost of purchasing a larger server can outweigh the benefits, so the cost
to mitigate risks can work against a consolidation strategy.
The last reason not to consolidate that you should consider is sunk cost. Your existing servers, whether purchased or leased, have a cost already associated with them that you may not be able to recoup. In some cases, you can trade in old servers on a lease or sell them back to the vendor. However, if you can't recoup any costs, chances are that these servers are older and may not be suitable for redeployment in your enterprise. In that case, the accountants may not see any cost benefit in moving to new servers when the old ones are paid for and sitting unused. Be aware of this potential roadblock when designing your strategy.

DEVELOPING THE GOALS


A consolidation effort should have clear-cut goals prior to beginning any detailed planning.
The team members should initially have a list of goals they seek to accomplish through this process, from both the business and technical viewpoints. Some of these goals will be dismissed in this phase, and others may become apparent; but without an overall philosophy and direction, this effort won't proceed smoothly. Table 3-1 lists some sample goals.
Table 3-1
Goals for a consolidation project

TYPE OF REASON    GOAL
Technical         Reduce the number of systems that must be monitored by DBAs.
Technical         Increase security by combining logins on multiple servers through consolidation.
Business          Reduce the licensing costs for SQL Server.
Business          Adhere to single-source vendor limitations for the project.


The goals you develop should include the type of consolidation you'll undertake. You could
consolidate the resources of your SQL Server instances in a variety of ways, and you could
choose to implement any or all of the following types of consolidation:
Instance consolidation. In this case, you look to reduce the number of instances by
moving databases from disparate instances to a single instance of SQL Server.
Physical server consolidation. In this type of consolidation, separate physical SQL
Server instances on different Windows servers are consolidated to one Windows server.
This could involve moving to multiple instances of SQL Server on one Windows server,
or possibly keeping the same number of instances. This could also mean running separate installations on one server by creating multiple virtual servers. The goal is to reduce both the number of physical servers and the number of Windows servers that must be managed.
Geographic consolidation. Although this type of consolidation involves moving servers
from one physical location to another, it often includes one of the two previous types of
consolidation. More than any other type of consolidation, this one affects the network, so consider that impact carefully in your plans.
Storage consolidation. Less a SQL Server move than a Windows consolidation, this
involves moving multiple SQL Server instances (and their corresponding Windows
servers) to the same storage device, such as a SAN device. It could also be a consolidation
of your SQL Server database files from multiple drives to fewer. Either option would
require less work by itself than the other methods, but it could be a part of a larger
consolidation project.
In developing the goals for the project, consider the existing environments that the applications run under. For example, many applications have a service-level agreement (SLA) with end users that can't be easily altered. These SLAs should be listed as goals for those systems to
which they apply.
Many other items apply to your systems, such as changes to headcount, budget restrictions
for new hardware, technology changes, and more. As planning is not edition specific, sample
issues to consider are available in the Planning for Consolidation with Microsoft SQL Server
2000 white paper, available at
http://www.microsoft.com/technet/prodtechnol/sql/2000/plan/sql2kcon.mspx.

Developing Guidelines for the Consolidation Project


Each project needs guidelines that determine how a consolidation effort will proceed.
Some of these guidelines will be technical and others nontechnical, but they will affect
the way the plan is developed. These rules govern how you'll move through the project. Although you'll begin to develop them in this phase, they will carry through all the phases as your organization learns lessons along the way. Some sample guidelines you may wish to use are as follows:

On-Line Transaction Processing (OLTP) and On-Line Analytical Processing (OLAP) workloads won't be placed on the same server.
Multiple instances can be used where database naming conflicts or login conflicts occur.
Consolidated servers will run only SQL Server, not other services such as file serving, Exchange, and so on.
These guidelines will be unique to your environment and need to meet the needs of your
company. The specific guidelines you develop will often be driven by business goals or
requirements.


Examining Your Environment


In developing the overall parameters for consolidation, one important set of information
is the structure of the current environment. Initially, every system should be considered;
only after you have valid reasons for dismissing it should the system be removed from the
master list. The next few sections examine how the current instances can be analyzed.

SQL Server 2005 is a strong platform that greatly improves database performance over SQL Server 2000. You'll most likely want to acquire new hardware for your consolidated server, but not necessarily. In either case, SQL Server 2005 outperforms SQL Server 2000 on the same hardware, assuming the minimum hardware and software requirements are met.
CERTIFICATION READY?
You set a performance goal of user response to data requests at 8 seconds or less. What factors lengthen or shorten this user experience?

At each stage, as you gather information about your current environment, note any potential
problem areas. These could be organizational issues, such as the scheduling of downtime, or
they could be technical issues, such as object name collisions. This phase is concerned with
the accumulation of documentation on your environment, not with deciding whether each individual change is possible.

ANALYZING APPLICATIONS
The first step in considering consolidation is to examine your applications and determine
which ones will run on either SQL Server 2005 or SQL Server 2008. If a software vendor won't support your application on SQL Server 2005 or SQL Server 2008, then you should eliminate that application from your plan. SQL Server 2005 should be completely backward compatible with SQL Server 2000 databases, and you can set the compatibility level to 80 (for SQL Server 2000, as opposed to 100 for SQL Server 2008, 90 for SQL Server 2005, or 70 for SQL Server 7.0). However, some vendors won't provide support for this configuration. Make sure you check, because running applications without support is a good way to irreparably damage your reputation.
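As a hedged example, the following T-SQL shows how a compatibility level can be checked and set; the database name Sales is hypothetical, and sp_dbcmptlevel is the SQL Server 2005 syntax (SQL Server 2008 also accepts ALTER DATABASE ... SET COMPATIBILITY_LEVEL).

-- Check the current compatibility level of a database (Sales is a placeholder name).
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'Sales';

-- SQL Server 2005: run a migrated database with SQL Server 2000 (80) behavior.
EXEC sp_dbcmptlevel 'Sales', 80;
-- SQL Server 2008 equivalent:
-- ALTER DATABASE Sales SET COMPATIBILITY_LEVEL = 80;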
While you're checking this, note that there is no longer a SQL Server 6.5 compatibility level. If you're still running version 6.5, which is no longer supported by Microsoft, then
you should find out if you can upgrade the application. Substantial keyword and structural
changes occurred between versions 6.5 and 7 and your application may not even upgrade
without a rewrite.
After you're sure all your applications will be supported, you should list the potential applications along with their database names and servers. Doing so will give you a master list from which to start considering consolidations. Take this list, and look for any database name collisions: two applications that use the same database name. Don't forget development and Quality Assurance (QA) databases. Because each SQL Server instance requires unique database names, if you find two applications that share the same database name, you need to note that fact. They can't run in the same SQL Server instance, although they could inhabit two separate instances on the same Windows server. You should also determine whether either application's database name could be changed.
Don't cross off any names at this point, but knowing how many collisions exist will give you a minimum number of servers or instances. Suppose your company had three Sales databases (one each for the development, QA, and production systems) and the names couldn't be changed. You would need at least three servers or three instances in a consolidated environment.
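One simple way to build this master list, assuming you can query each instance, is a sketch like the following; run it on every instance and combine the output in your spreadsheet to spot duplicate names.

-- Inventory the user databases on one instance; duplicates across instances are name collisions.
SELECT
    @@SERVERNAME AS instance_name,
    name         AS database_name,
    create_date,
    state_desc
FROM sys.databases
WHERE database_id > 4   -- skip master, tempdb, model, and msdb
ORDER BY name;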
The next step is to examine each application and determine whether any of them are mission critical or too risky to combine with other systems. Doing so will further increase your
minimum server number because you may not be able to combine some databases with others.
Be careful, and ensure that you solicit feedback from the business owners of systems. You
may not think the Sales system needs to be separate from the Accounting application, but the
Finance department may feel differently. You can provide them with the benefits and the reasons
you think this combination will work, but make sure you consider the viewpoints of the
various departments.
While you do this, don't mention the multi-instance nature of SQL Server as a way of combining different SQL Server instances onto one Windows server. This is a technical distinction that few business users will understand. Treat instances as if they were separate servers,
and find out what the various stakeholders think about their application being combined with
others. You can make the multi-instance decision later.
Also weigh the security risks and get your security department involved (if you have one).
Many view the development systems as less critical than, say, the Accounting system. However,
that doesn't mean you can put the Development database on the same server as the Web site
backend. Security is a constant process, not a single event, and this is a good place to consider
the implications of combining databases on one server.
Save examining performance implications or hardware requirements for later; the previous items can quickly lower the number of applications that you must examine in detail. Because the performance and hardware analysis is time consuming, you don't want to examine any more systems than you have to.
Also consider the Service Pack/patch implications of combining servers. Because many patches apply to an entire SQL Server, or to all instances on a server, you need to be aware that if you patch one application, you're potentially patching them all (or breaking them all, if the patch changes functionality). Include the third-party vendors' past patch response times and the Service Pack certifications for their applications. Also note whether servers are currently at
different patch levels, and be sure each application is tested at the highest patch level that will
exist on a consolidated server.
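A quick way to record the current version and patch level of each candidate instance is the following query; it is only a sketch, but SERVERPROPERTY returns these values on SQL Server 2005 and later.

-- Capture version, service-pack level, and edition for the consolidation spreadsheet.
SELECT
    SERVERPROPERTY('ProductVersion') AS product_version,  -- for example, 9.00.4035.00
    SERVERPROPERTY('ProductLevel')   AS product_level,    -- for example, SP3
    SERVERPROPERTY('Edition')        AS edition;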

MONITORING APPLICATIONS
Once you have a list of applications and their databases that are potentially available for consolidation, you need to begin looking at their performance requirements as well as gathering detailed configuration information. Because applications that are combined on one server or one instance may affect
each other, you need to examine a few performance points with each application/database
combination.

TAKE NOTE

You should have current baselines for all your servers that you can use in this section. If you don't, consider gathering them on a regular basis.

You should have up-to-date documentation on the configuration of each server regarding
memory, security, server and database settings, collations, and any other changes made to
a default installation. The disk usage for the server, local or SAN based, as well as network
requirements should be included.

TAKE NOTE

When gathering disk requirements, account for disk space used by files outside SQL
Server. This includes backup files, data-transfer or bulk load files, Data Transformation
Services (DTS) files, and more, that take up space but aren't usually associated with
SQL Server.
When you're gathering security requirements, it's important to consider whether the
SQL Server instances are all in the same domain. Large enterprises sometimes have more
than one Active Directory (AD) domain. The security changes can be challenging if you
attempt to consolidate SQL Server instances from two different domains.


TAKE NOTE

Each version of SQL Server has utilized the server resources more efficiently. This means that for a given hardware platform, a SQL Server 2008 instance will run more quickly than a SQL Server 2005 instance, and a SQL Server 2005 instance will run more quickly than a SQL Server 2000 instance. Note the SQL Server version for each consolidation candidate.

You may want to start by gathering quick averages of performance times for queries on the
different servers. You should examine a representative sample of queries using Profiler or
another monitoring tool and gather data on the time and frequency of queries. This is less for planning than for contingency: if issues arise later, having this data will help you determine in more detail where they are occurring. When you gather this data, use a few different dates and times.
Before you begin, start a spreadsheet on which to record the data. Doing so will help you
tabulate and compare your data. List the applications down the left side; next to each, include
the current server, database name, and database size. Use a consistent notation for size (probably gigabytes). You should also note the CPU type and speed, the RAM, and the total disk
space for each server available for the SQL Server instances. At the top of each column, record
a header that notes the values you're placing below it. You may want to check whether SQL Server dynamically manages memory or whether there is a configured limit. Record the average that SQL Server uses as well
as the Windows machine total.
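If you prefer to pull the size figures directly from each instance rather than from documentation, a sketch such as the following returns the allocated size per database; the conversion assumes 8-KB pages.

-- Approximate allocated size per database, in gigabytes.
SELECT
    DB_NAME(database_id) AS database_name,
    CAST(SUM(size) * 8.0 / 1024 / 1024 AS DECIMAL(10, 2)) AS size_gb
FROM sys.master_files
GROUP BY database_id
ORDER BY size_gb DESC;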
On each SQL Server, you need to examine some counters and determine how much load
each application places on its SQL Server. Gather the following counters in Performance
Monitor to get an idea of each server's usage. Each is discussed in the appropriate section,
and Lesson 1 contains more information about how to gather this information.

MONITORING THE CPU


The CPU is the heart of the system, and a faster CPU often means better performance for
your server. To prepare for a consolidation effort, you need an idea of the load from each
server to understand whether you can combine two servers. Use the following counters:
Processor: % Processor Time. This counter helps you understand how busy the overall server is when the Windows server is dedicated to a single SQL Server instance.
Process: SQL Server process: % Processor Time. This counter is necessary when you
examine servers that have other applications, including other SQL Server instances
running on one Windows server.

MONITORING MEMORY
Memory is critical to SQL Server's processing of requests, and more is always better. Note the
amount of physical memory on the server; then, to properly size a server holding more than
one application, monitor the following counters:
Memory: Available Bytes. This is a good general counter that gives you an idea of how
much memory pressure the server is experiencing. If this is less than 100 MB, the server
is starting to feel pressure.
Process: Private Bytes: sqlservr process. This should be close to the size of Process:
Working Set: sqlservr process if there is no memory pressure from the Windows server.
Memory: Paging File: %Usage. If this value is high, then you may need to increase the
size of the paging file or account for a larger file on the new server.
SQL Server: Buffer Manager: Buffer Cache Hit Ratio. If this is less than 80 percent
on a regular basis, not enough memory has been allocated to this instance.
SQL Server: Buffer Manager: Stolen Pages and Reserved Pages. The sum of these two values divided by 128 (pages are 8 KB) should tell you approximately how much memory to set, in MB.
SQL Server: Memory Manager: Total Server Memory (KB) and Target Server Memory.
The first counter shows how much memory the server is consuming, and the second is the
amount it would like to consume. These values should be close to one another.
These metrics tell you how much memory is being used by the processes inside SQL Server.
Because SQL Server is fairly memory hungry, these will be padded if each application has its
own SQL Server, but you can still use the information to make some guesses about a consolidated server.
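If you would rather query these figures than read them in Performance Monitor, the following sketch pulls the total and target memory counters from sys.dm_os_performance_counters (available in SQL Server 2005 and later); exact counter names can vary slightly between versions, so the LIKE filters are deliberately loose.

-- Total vs. target server memory as reported by the instance itself (values are in KB).
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND (counter_name LIKE 'Total Server Memory%'
       OR counter_name LIKE 'Target Server Memory%');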


It's important to check whether Address Windowing Extensions (AWE) or Physical Address
Extensions (PAE) is being used. If these switches are being used, then you should be especially
careful when consolidating other applications using the same switches onto a server without
large amounts of memory. These extension settings only apply to 32-bit versions of Windows
Server. Large memory usage with these two extensions may push you to investigate 64-bit
versions of Windows Server and SQL Server. Note that the Performance Monitor counters don't include AWE values. You'll need to investigate inside SQL Server for more accurate
information. You can get detailed instructions for checking memory in the Troubleshooting
Performance Problems in SQL Server 2005 white paper available at
www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx.
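One hedged way to check the AWE setting from T-SQL is with sp_configure, as sketched below; 'awe enabled' is an advanced option, so it must be made visible first.

-- Check whether AWE is enabled on a 32-bit instance.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled';   -- a run_value of 1 means AWE is in use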

MONITORING DISK PERFORMANCE


In gathering disk performance information, consider all the disks your application uses.
Because disk subsystems vary tremendously in their makeup, underlying structures, and individual component performance, consider these general guidelines. A server that gets reported
as overloaded based on these guidelines may be performing well at the SQL Server level and
vice versa. Consider the perceived performance at the SQL Server level, meaning application and query responsiveness, along with the data from performance counters, when you determine whether additional databases can be added or when you design a new storage subsystem.
Examine the following counters. Also examine all logical disks in use by your SQL Server instances, whether they're local storage or SAN based:
% Disk Time. Examine this value for each disk. If it's regularly greater than 50 percent, then you're probably experiencing bottlenecks in the disk subsystem. New databases shouldn't be added to disks that are experiencing high disk-time usage. Instead, note
the structure of the logical drive, and factor that information into designing a new
subsystem.
Average Disk Queue Length. Ideally, this value should be 0; if it's sustained at 2 or more on any of your logical drives, then you may have an issue and should consider a new subsystem. This is a rule of thumb; newer subsystems may be capable of adequately handling this number of requests, but it's a point of concern.
Avg. Disk sec/Read and Avg. Disk sec/Write. These counters give you an idea of how quickly you're moving data to and from the disk. Log drives should show low values, such as 2 to 4 ms, and your data files should average less than 20 ms. Again, these values combined with the performance of the SQL Server should tell you what the load is like on this particular subsystem.
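In addition to the Performance Monitor counters above, SQL Server 2005 exposes per-file I/O statistics you can use to cross-check latency; the following is a sketch, and the averages are cumulative since the instance last started.

-- Average read and write latency per database file, in milliseconds.
SELECT
    DB_NAME(vfs.database_id)                              AS database_name,
    mf.physical_name,
    vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_ms,
    vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_ms DESC;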

TAKE NOTE

If you're measuring the performance of a RAID set of disks, you need to perform additional
calculations on the values from Performance Monitor. Check one of the white papers on
performance for more details.

All your SQL Server databases should use disk subsystems that include some sort of RAID protection, but it's something you need to note when gathering information about your
servers. Different RAID levels, as well as different numbers of disks in a RAID array, will
affect the performance of your new consolidated drive. Note how many spindles are being
used in addition to the RAID level of each drive being used for database files. This will
help you determine whether your existing subsystem or new design is adequate for the
expected load.

REF
Determining RAID levels is discussed in Lesson 10.

One interesting note on disk performance is that the percentage of disk space in use can affect the performance of your systems. Studies done on disk access times show that performance is highest when the utilization of the disk is less than 50 percent. After this point, the heads must wait longer to access data, because more of the data sits on the slower, inner portion of the disk and seek distances grow.


In addition to checking the performance of the subsystem, ensure that you have adequate
space. In consolidating servers, be sure to account for the expected growth of the underlying
systems. Running out of space a few weeks or months after consolidation will likely upset a
great many people, not the least of which is the group that must fund additional space. Make
sure that any new disks on which databases are to be moved can handle the expected growth
for at least six months and preferably a year.
System databases also can experience growth, and that should figure into your calculations.
The master database is fairly small and should remain so in almost all cases, but msdb stores a
few pieces of information that can add to storage requirements. With multiserver administration features enabled, msdb could grow unexpectedly; be sure to note any issues at this stage
in your documentation.
Of more concern than msdb is tempdb, because it's more likely to have large storage requirements. Intermediate worktables and other structures are stored in tempdb and can cause
growth in this database that exceeds the size of user databases in some cases. Factor the
tempdb usage on all servers being consolidated, and expect that you may wish to size a new
tempdb using the sum of the other maximum usages. The tempdb database is often placed
on its own disk array, so your calculations for this disk subsystem should ensure a fast and
responsive array for this database as well as adequate space. Expect that the combined load of
multiple systems will be larger on a consolidated tempdb, because this is a shared resource on
each instance.
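A quick way to record the current tempdb footprint on each candidate instance is the following sketch; sizes are reported in 8-KB pages, so the query converts them to megabytes.

-- Current tempdb data and log file sizes on this instance.
SELECT
    name,
    type_desc,
    size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;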

TAKE NOTE

One way to mitigate tempdb contention in consolidation is to use multiple instances of SQL
Server on one Windows server. Each will have its own tempdb, which means less contention
on one database. The downside is that you may need more disk subsystems to ensure separation. Also balance the gains in tempdb separation with the fixed-memory setups common with
multiple instances.

Consider the space requirements of your data and log backups, as well. Many SQL Server
instances store these files on a separate drive from the data and log files, so make sure that
they're noted in your documentation.

MONITORING OTHER SQL SERVER-SPECIFIC METRICS


In addition to the gross metrics for the server, you should monitor some specific SQL Server
metrics:
SQL Server: General Statistics: User Connections. In a per-server-based licensing
environment, knowing the number of users can affect the costs. This is also valuable
because each connection uses a small amount of memory.
SQL Server: Cache Manager: Cache Hit Ratio. This value provides a metric of how
often the server finds the data it needs in cache. In general, the server should find data
more than 90 percent of the time. If you have two instances with numbers below 80
percent, you may not want to consolidate them unless you retune the applications or
add substantially more RAM.
SQL Server: Databases: Transactions/sec. The cost of a transaction varies widely
depending on the amount of work done inside the transaction. However, over time,
this metric will tell you how busy your SQL Server is. As with the User Connections,
if you have two servers with high values, you may not wish to consolidate them.
Alternatively, you may find multiple servers with very low rates and want to combine
them on one server.
SQL Server: Databases: Database Size: Tempdb. This will aid you in sizing a consolidated tempdb for multiple servers.



LAB EXERCISE
Perform Exercise 3.1 in your lab manual.

In Exercise 3.1, you'll gather a number of performance-related metrics from one of your SQL Server instances. Although the exercise will walk you through setting up a single monitoring session, it's recommended that you perform this multiple times at different times on different
days to get a picture of your SQL Server over time.

MONITORING GENERAL ISSUES


From your monitoring history, you should be able to generate average values as well as some
maximum values that give you a rough picture of the performance of your servers over time.
Record these next to each database in your spreadsheet. Use two columns for each value: one
for the average and one for the maximum. This will help you to easily see whether two of
your databases have potential problems with a maximum value. Don't just take the maximum value from Performance Monitor, because that can be misleading. One recording of 100 percent CPU can throw off your calculations when 99.9999 percent of the time the value is 10 percent. Most large ISPs use the 95th percentile method for recording maximum usage of bandwidth: They throw out the top 5 percent of values and take the next highest value as the maximum. If you recorded eighteen CPU values between 5 and 20 percent, one value at 25 percent, and one at 100 percent, you would throw out the 100 percent recording and note the max as 25 percent.
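If you load your Performance Monitor samples into a table, the same 95th-percentile idea can be expressed in T-SQL; the sketch below assumes a hypothetical #cpu_samples table with a single cpu_pct column.

-- Discard the top 5 percent of samples, then take the highest remaining value.
SELECT MAX(cpu_pct) AS p95_max
FROM (
    SELECT cpu_pct,
           NTILE(100) OVER (ORDER BY cpu_pct) AS pct_bucket
    FROM #cpu_samples
) AS ranked
WHERE pct_bucket <= 95;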
This is a lot of work, but it isn't necessarily continuous. Set up your monitoring (you should already be doing some of this), and gather data over a period of days or weeks. Monitor different days and times, and include times when your application is generating a heavy load. You're recording data that will factor into your decision about which servers are consolidated
as well as the details of the planning process in the next phase.

CREATING SERVICE-LEVEL AGREEMENTS


All the performance data you've gathered is of a technical nature. However, other performance requirements are less obvious. One of these is an SLA that an application may have with its end users. Such agreements spell out uptime/downtime limitations, performance metrics to be met, response times, and more. They're important business-level considerations that may drive the decision to consolidate SQL Server instances.

TAKE NOTE

SLAs aren't always easy to find. Usually, these agreements are worked out between departments in large companies, and the documentation may not be stored with the technical documentation on the server. With staff turnover, it's possible that they may even be lost or unavailable,
existing only in the memory of long-term employees.

A good place to start looking for SLA documentation is with a business liaison for a particular
application or with an assistant close to your CIO/CTO. They often maintain this type of
business documentation.

CONSIDERING GEOGRAPHICAL ISSUES


As a company grows, it often acquires space in diverse locations. This can be a building next
door to the headquarters, in the next city as a remote office, or halfway around the world.
You may find that your SQL Server instances become similarly dispersed throughout the
enterprise; as you work on a consolidation, be conscious of which servers you're considering
moving to a new location. Just as servers in separate domains need to be handled differently,
servers that move locations need special consideration.
In most situations involving multiple physical locations, each location will probably be on
its own network subnet. Relocating a server from one subnet to another would likely involve
network addressing issues as well as potential client connection issues such as DNS and
connection string values.


Your network is a dynamic topology that responds and reacts to changes constantly. Although
switches have helped to smooth out local traffic, routers can seriously affect the performance
of an application if introduced into the flow of traffic. Moving a server to a new physical
location can cause stress on the network if the link back to the users is less capable or reliable
than the previous one. Consider the impact of any move on network traffic and addressing
and consult your network engineers.

TAKE NOTE

You don't have to move a server to a new building to have network issues in a consolidation effort. Even moving your database to another server in the same rack could result in a subnet change. Because traffic between two or more subnets must pass through a router, you may
have traffic problems, security problems, and so on. Consider the need to involve network personnel in any consolidation effort.

MONITORING ASSOCIATED SYSTEMS


Do you know what other servers are required for a particular application to function? Do
you know every other server that connects to each database? Chances are, no matter how well
you know your environment, there are some connectivity items you're unaware of. Or maybe there are connections that only you know exist. These are places where your consolidation effort can have problems, and you may not find out until you've moved your production environment.
With web farms, load balancers, and other scale-out technologies, it can be difficult to keep
track of every system interaction, but you should document as much detail as possible. As an
example, suppose you have a reporting SQL Server that receives information from the web SQL Server database using an SSIS package every night. During the consolidation effort, all systems are tested and appear to work on a test version of the reporting SQL Server. When this server is consolidated onto another, however, the job running from the web SQL Server doesn't have rights to connect to the new server at the firewall level. This issue can be difficult
to track down if not documented, and many processes like this are poorly documented.

TAKE NOTE

As you document interactions and connections between servers, applications, and processes,
don't forget to document a connection on both servers. A connection between Server A and
Server B should be documented in two places: the Server A documentation and the Server B
documentation.

Included in the discussion of linked systems are two SQL Server topics: replication and linked
servers. These two technologies are often implemented for very different reasons, and combining two servers connected with either of these deserves consideration. Replication is often used
to copy data to another system for two reasons: offline access, which isn't a consideration here;
and reducing the load on a primary system and letting another server have this data available
to an application. If you combine two SQL Server instances that replicate data between themselves, you will probably be defeating the purpose of replication. Carefully examine the implication of such a move, and factor that into your decision to consolidate servers.
Linked servers, on the other hand, often are used to combine information from two servers
in an ad hoc manner. Linked-server queries are often slower than cross-database queries, and
you may be able to improve performance by eliminating the linked server if the two databases
using the link are combined. The downside is that programming changes will be necessary to
rewrite views, stored procedures, user-defined functions (UDFs), assemblies, and so on that
use the linked server. Consider all of this as a postconsolidation project to complete later.
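To find out whether linked servers are defined on a candidate instance, a sketch like the following against sys.servers (SQL Server 2005 and later) lists them; each entry is a dependency to document.

-- Linked servers defined on this instance.
SELECT name, product, provider, data_source
FROM sys.servers
WHERE is_linked = 1;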
One last interconnected set of systems you should consider is your administrative systems, especially the backup system. If you have multiple backup systems (a tape drive on each server
or even two larger consolidated systems), make sure your consolidation plan won't overload one of them. Although the total amount of backup data won't change, the distribution across systems could overwhelm one of them, especially if you're using local tape systems for your SQL Server instances. The same is true of monitoring or other administrative software systems if you have multiple installations.

CONSIDERING OTHER ISSUES


Deciding in the first phase which servers to consolidate is a difficult process, and there are no hard-and-fast rules to simplify it. You need to use your judgment and experience as a DBA to balance the trade-offs and make the best decision you can for your applications. All the topics in the first phase, as well as the following items, are designed to help
you understand what you need to consider and the effect each may have on your SQL Server
instances:
Shared resources. Many of the shared subsystems between instances have been
eliminated in SQL Server 2005. Full-Text Search is a big one, but there can still be other
dependencies between instances. Be sure these don't conflict with any of your decisions about which servers to consolidate.
Extended stored procedures (XPs). Because these procedures operate outside of SQL Server, they can cause instability, especially if they aren't extremely well written. Memory leaks are a big cause of concern with custom extended stored procedures. Factor in the use of these by separate applications on a combined server. Be especially careful if you have custom XPs that have been upgraded or that exist as different versions on different servers.
By default, the ability to use some XPs is disabled in SQL Server 2005. You can use the Surface Area Configuration tool to configure these XPs and enable the use of xp_cmdshell.

TAKE NOTE

SQL Server 2008 no longer includes the Surface Area Configuration tool. Service and connection settings are now managed through SQL Server Configuration Manager, and feature settings such as xp_cmdshell are configured with sp_configure or the Surface Area Configuration facet of Policy-Based Management. The settings themselves have not changed, just the way you access them.

Collation conflicts. This is a potential point of conflict if defaults are expected in an application and a new server uses different defaults. With the granularity of collation in SQL Server 2005 extending down to the individual column, this shouldn't be a problem, but make sure to note potential conflicts and put a mediation strategy in place.
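Collation settings are easy to capture while you are documenting the environment; the following sketch records the server default and each database's collation so mismatches between consolidation candidates stand out.

-- Server default collation for this instance.
SELECT SERVERPROPERTY('Collation') AS server_collation;

-- Database-level collations; note any that differ from the proposed consolidated server.
SELECT name, collation_name
FROM sys.databases
ORDER BY collation_name, name;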
At this point, you should have gathered a great deal of information about the structure
and configuration of the current environment. Your plan should identify candidates for
consolidation as well as list problem areas. The next phase will provide guidelines for
developing your plan.

Phase 2: Planning
THE BOTTOM LINE

The second phase of this process is the planning stage where the servers are designed and the
processes are initially built to move forward.

At this point, a team is working on this project, and the members have overall goals and
guidelines as well as a list of systems to consolidate. In this phase, you do the more detailed work of determining the makeup of the consolidated environments, including which hardware will be used to run the consolidated servers. This is also when basic procedures and
processes are tested and developed for your environment. This is in contrast to the Envisioning
phase, where you analyze the benefits and costs of consolidation to the entire enterprise.
You begin with analysis, looking at the information from phase 1 and beginning to consider
how to design the new consolidated system. Note that this doesn't necessarily mean buying new hardware. Existing hardware can be used if it meets the requirements developed in
the design of the new server. The initial testing of procedures and processes occurs prior to
detailed development and final testing in the Developing phase.

Evaluating Your Data


As you're gathering all this data about performance, you should simultaneously be determining which databases can and can't coexist. If someone is concerned about Database A being on the same server as Database B, you'll have to make a judgment about whether
they can be in separate instances or must be on separate servers.

You should start to have general ideas of which applications can be combined on one server
based mostly on CPU load and disk space. Without adequate disk space available, either direct attached or on a SAN, you can't combine the applications. Many decisions and trade-offs regarding hardware are affected by other factors discussed later. Check your plan against all these sections, and go back through it each time you make a change. This is a complex process involving many intertwined factors, so a single pass won't be sufficient.
Consider the hardware systems separately and then in the context of the design possibilities.

EVALUATING YOUR PROCESSOR DATA


Try to keep CPU utilization below 70 percent. This gives room for spikes, although less room
than on separate servers. This decision results in a trade-off of performance capabilities, so
make sure the benefits outweigh the potential performance limitations. If you're moving to like processors, then you can total the average loads and stop when you hit 70 percent. If you're moving to more powerful processors on new hardware, then you'll have to use benchmarks to make some estimates about the load on the new CPU.
In either case, ensure that you understand how much less CPU headroom you have, and
examine your CPU spikes carefully. If there are particular times when multiple applications
make heavy use of computing resources, such as end-of-month or end-of-year processing, you
may wish to have those applications use separate SQL Server instances.

EVALUATING YOUR MEMORY DATA


Decisions about RAM sizing can be more difficult. RAM is used heavily by SQL Server, even
on lightly loaded systems, so you should determine as much information as you can about
how your SQL Server instances are using RAM.
RAM usage is managed dynamically by the server. However, as your servers grow, you may
implement AWE and/or PAE and set memory usage. Running multiple instances usually
means you should set the memory usage as well, and your decisions change if you choose to
move to 64-bit hardware.
This means you need to consider larger memory requirements for combined servers, although the extrapolation isn't linear. If you have three servers with 2 GB each, then consolidating all three to one server doesn't mean you need 6 GB; 4 GB or even less may be feasible. In general, however, you should determine how much memory each server is using and develop a minimum memory requirement for a combined server using this number along with the expected workload of the combined server. For very lightly loaded servers, you may not need to add memory; for larger workloads, you may wish to set a large minimum memory requirement. RAM is now quite inexpensive, so there is little cost to specifying more RAM than the minimum necessary.
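When you do fix memory for an instance, sp_configure is the usual mechanism; the values below are purely illustrative and should come from the sizing work described above.

-- Set fixed memory bounds for one instance on a consolidated server (illustrative values).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 2048;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;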


Different OS versions have different maximum memory limits. These maximums can affect
your decisions to implement new servers because there will be a software cost as well as a
hardware cost to change versions.
WARNING One important thing to note on your spreadsheet is whether any of your servers have memory specifically set to some value. Most SQL Server instances are set to dynamically manage memory, but some are limited by configuration to a maximum amount. If this is the case, it's possible that the SQL Server database for this application uses additional memory and may try to use more when consolidated to another server. Reconsider the load of any servers whose memory is limited, and allow additional padding in your calculations.

WARNING Beware of SAN-based storage if you haven't designed the setup. Sometimes a Logical Unit Number (LUN) presented to a server is made up of physical drives shared by other LUNs. Consult your SAN vendor to see whether this is a potential problem.

Your design for a new SQL Server that exceeds 2 GB of RAM should also include the
proper settings for AWE and PAE. You can read about these settings in the Windows Server
2003/2008 documentation or SQL Server Books Online.

PLANNING YOUR DISK SUBSYSTEM


The disk subsystem is critical to a smooth-running SQL Server because the data is stored on
disk, the log is written to disk, backup-and-restore speed is limited by disk response, and so
on. In addition, running out of RAM means a slower-running SQL Server; running out of
disk space means a stopped SQL Server.
In examining your subsystems, first make sure you're looking at the logical versus physical disk setup. They aren't necessarily the same; you need to understand that performance is driven by the physical capabilities, but it can be masked by the logical setup. In other words,
you may design a fast physical subsystem but then place two, three, or more logical drives on
this subsystem that contend for the physical access. A logical drive containing tempdb may
appear to be responding slowly, but if you determine that a heavily used transaction log on
another logical drive shares the same physical array, removing the bottleneck may be as simple
as separating one logical drive to another physical array.
Good practice suggests that each physical drive array contain its own logical drive and no
others. These days, with larger file systems capable of handling hundreds of gigabytes of
space, there's no reason to combine multiple drive letters onto one physical array. This will
keep confusion to a minimum as well as make it easy for you to monitor performance.
You need to consider three additional factors in your plan for disk subsystems: space for
backups, the RAID level of the underlying storage, and the number of disks that are used.
This is of less concern on a SAN, but with direct-attached storage, moving from RAID-1 to
RAID-5 can have a big impact on performance. In general, you should move to like storage.
You should also keep an equivalent number of disks (spindles), or increase the number, in use on a particular logical drive.
The design of your new subsystems should take all these factors into consideration separate
from any existing hardware. Once you've made some decisions on your requirements, you
can examine the capabilities of existing hardware and determine whether any systems can be
reused or reworked to meet your requirements.

Making Initial Decisions about the Plan


At this point, you can start to cut and paste rows in your spreadsheet into groups, separating each group by a few rows. Each group represents a potential consolidated server,
so you can total up the CPU loads, disk space, and so on for that group. These are still rough decisions, but this is a way to start formulating a plan that can be picked apart by
yourself and others and then refined until it becomes viable.

Once you've set up these groups, examine the memory usage and tempdb usage to refine your plan. These two areas are hard to examine, and this is where the process becomes more of an art than a science. One example is the Memory: Pages/sec counter, which is relative between servers. It's hard to compare it across servers, but based on the amount of memory on the server available to SQL Server and looking at similar levels of RAM, you should get an idea of how memory hungry your database is. Raw numbers can be deceiving, because SQL Server will cache as much as it can and take up more memory than it needs for small applications. Try to limit the number of high-memory-usage databases on each server. Keep in mind that SQL Server 2005 memory usage is very different from SQL Server 2000.
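To supplement the Performance Monitor counters, you can also compare how much memory each existing instance actually uses against its target; this is a sketch against the SQL Server 2005 performance-counter DMV:

SELECT  counter_name, cntr_value
FROM    sys.dm_os_performance_counters
WHERE   object_name LIKE '%Memory Manager%'
        AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');
-- A total close to the target on a server with little free memory suggests a memory-hungry workload.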


The tempdb database is also a concern when consolidating servers. Because all the databases on an instance share one tempdb, if you have two or three applications that make heavy use of tempdb, then you can overwhelm a server and swamp the disks on which tempdb resides with requests (or even cause tempdb to grow out of control). SQL Server 2005 offers some additional uses for tempdb, such as row versioning and online index operations. Sizing the tempdb database is an art, and familiarity with the behavior of tempdb over time on your SQL Server instances helps tremendously. You can look at the sizes of tempdb over time and make some extrapolations regarding the needs of multiple databases on a consolidated SQL Server. Allow for some padding, and size the new tempdb appropriately to handle the needs of all the applications that will use it. This often means adding together the usage of all the tempdb databases that are being consolidated.
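As a starting point for those extrapolations, you can sample how each existing instance's tempdb space is actually being used; this sketch uses the tempdb-specific DMV available in SQL Server 2005:

SELECT  SUM(user_object_reserved_page_count) * 8 / 1024.0     AS user_objects_mb,
        SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
        SUM(version_store_reserved_page_count) * 8 / 1024.0   AS version_store_mb,
        SUM(unallocated_extent_page_count) * 8 / 1024.0       AS free_space_mb
FROM    tempdb.sys.dm_db_file_space_usage;
-- Sample this over time on each instance being consolidated, then add the peaks plus padding.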
The following sections show a few of the potential options you can consider when designing a
new consolidated server environment.

RUNNING THE UPGRADE ADVISOR


You may determine that you can consolidate SQL Server 2000 instances using existing
hardware. Before you consider upgrading any server to SQL Server 2005, you should run the
Upgrade Advisor to ensure that the hardware can adequately handle the upgrade.
If you're purchasing new hardware, then make sure your hardware not only exceeds the recommended levels from Microsoft but also will handle the additional load of multiple databases or instances.

CHOOSING TO USE MULTIPLE INSTANCES


One remedy for overwhelming tempdb is to use multiple instances of SQL Server on one Windows server. If you decide to choose this option, then make sure you understand that running multiple instances requires that you manually set the memory allocated to each instance. Because your base Windows machine may be limited, make sure that you have enough RAM to support both instances. If you're running with AWE or PAE support, you'll suffer some performance issues as memory is swapped in and out of the addressing window. Moving to 64-bit Windows and SQL Server can alleviate this issue, and this is one of the big reasons to move to 64-bit SQL Server 2005. However, you need to make sure your applications will run on 64-bit platforms.
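A minimal sketch of that manual allocation, run separately against each instance; the 6144 MB figure is only an example of splitting RAM between two instances while leaving headroom for Windows:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6144;  -- cap this instance; repeat with its own value on the other instance
RECONFIGURE;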
The other consideration with multiple instances is whether the application will support multi-instance connections. As mentioned earlier, connecting to a named instance of SQL Server requires that you address the server as windows name\instance name. If you're considering multiple instances, you should check that your applications support this naming convention.
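For example (the server and instance names here are hypothetical), a client or administrative script would address a named instance like this:

sqlcmd -S DBSERVER01\Sales -d SalesDB -E

or, in a connection string:

Server=DBSERVER01\Sales;Database=SalesDB;Integrated Security=SSPI;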

ADDRESSING 64-BIT SQL SERVER


Although 64-bit versions were available for SQL Server 2000, 64-bit computing wasn't considered mainstream and the installations were few and far between. With SQL Server 2005 and SQL Server 2008, new Intel 64-bit CPUs, and other 64-bit advances, 64-bit computing is a much more viable option. The big push to 64-bit computing is to take advantage of the huge memory space: no more 4 GB limit. This is worth considering in a consolidated environment, but because 64-bit computing has a cost (new hardware, new Windows version, training, and so on), you need to do a cost analysis before making this decision.
CONSIDERING HIGH AVAILABILITY
Another concern in consolidating a larger number of servers to a smaller number is the availability of each application. A short example will help to illustrate the issue.
Suppose you're working on a consolidation and have chosen to consolidate the QA server along with the Inventory server because performance metrics show both of them to be a good match for consolidation onto the existing Inventory server. You perform all the planning and other phase work as outlined in this Lesson and then complete the deployment of both applications onto the Inventory server with no issues.

Designing a Consolidation Strategy | 77

A few months later, the QA team is testing a new version of one application and finds a bug.
The resolution from the vendor is a patch for SQL Server. When this patch is applied to
the QA server, it causes a problem and results in the server being rebuilt over the next day.
During this time, the Inventory application is unavailable, and the IT team must deal with
unhappy end users.

TAKE NOTE

It's unlikely that a nonproduction server would be consolidated with a production server, but work performed on one application can affect another. In this case, a clustered situation might have allowed Inventory to run on the failover server while the primary was being rebuilt.

This example illustrates how a consolidated server can pose a higher risk of instability than separate servers. If you have two applications, each with a 20 percent chance of bringing down a server, then combining them means the new server has roughly a 36 percent chance of going down due to one of these applications (1 − 0.8 × 0.8 = 0.36, assuming the two risks are independent).
Another example is the consolidation of the Accounting and SalesCRM systems onto one
server. In the event that a hardware failure occurs, two groups are unable to work, and a
greater portion of the business is affected than when the two applications were separate.
WARNING: When looking at high-availability solutions, consider the training costs for your staff. Clustering with SQL Server 2005 and Windows 2003 is much easier than with past versions, but your staff must have or acquire additional skills.

This brings us to the need for additional high-availability mitigation strategies for a consolidated server. Clustering, log shipping, database mirroring, and other high-availability technologies become more important (and possibly essential) in a consolidated environment. The
resulting cost of implementing them may outweigh the benefits of consolidating onto fewer
servers.

GOING THROUGH MULTIPLE ITERATIONS


Now that you've made some decisions about which databases can be combined and the details of your hardware, check your plan to be sure your decisions are sound. Multiple individuals should validate the plan, and you should be able to articulate the technical and business reasons behind your decisions to others.

Case Study: Consolidating and Clustering


You're considering moving dkRanch Cabinets from its existing five servers down to three servers: one for Accounting, Inventory, and SalesCRM; a second for WebPresence; and a third for Development. However, the chief financial officer decides that having the three major business applications on one server is a large risk to the business. You counter with the idea that you can create a four-node clustered setup with three active nodes and one passive node to mitigate the risk.

It's estimated that the five-year cost of keeping the existing environment and going through scheduled hardware upgrades will be $22,000. Moving to new hardware now for each server is $4,000, but the additional software for clustering will cost $16,000 under your business deal with Microsoft. Is this worthwhile?
Solution: The total cost of the clustered solution will require four servers at $4,000 each, which is $16,000 plus the cost of the software. This would cost the company $32,000 in total, which is substantially more than the $22,000 that is expected to be spent on the current environment.

There could be other considerations (monitoring software costs, data-center costs, staffing costs, and more) that would provide $10,000 in savings and make this a more cost-effective solution. But as the problem was outlined, with the additional risk of having mission-critical business applications on the same server, it probably is not worthwhile.


This may require that you modify your plan and move databases around. As you move databases, make comments in your spreadsheet that give the reasons or restrictions behind your decisions. You may, for example, note that the WebPresence database must remain on its own server for security reasons. That way, as you move things, it's easier to keep all the information and rationales for your decisions in one place.

SIZING HARDWARE
At some point, you may be moving databases and realize that you're exceeding 70 percent average CPU or discover that you'll need more resources in some area. This is when you begin sizing new hardware, as is usually the case with consolidation. You might consider consolidation when getting ready to purchase new servers, so this is a logical spot to make decisions.
For your CPUs, you'll have to depend on benchmarks and educated guesses if you're choosing CPUs of a type that you don't have in your environment. Consider the relative strengths in benchmarks of the integer values, because databases primarily move data around (an integer operation). You may wish to subtract 20 percent as a pad when sizing the CPUs. You may consider dual-core CPUs as 1.8 or so single-core CPUs. Multiple CPUs and symmetric multiprocessing (SMP) systems should use a benchmark of each additional CPU counting for 75 percent of a full one, due to the overhead of running the SMP system. Consider the relative performance of different SQL Server versions in your decision.
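For example, under these rules a proposed server with two dual-core CPUs would count as roughly 2 × 1.8 = 3.6 single-core equivalents, and after discounting each CPU beyond the first to 75 percent, about 1 + (2.6 × 0.75), or approximately 3 effective CPUs. These numbers only illustrate the estimating method described above; they are not benchmarks.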
RAM requirements are almost always easy to decide on: Get as much as you can and then add more. You can't have too much RAM in a SQL Server, so size as much as you can install or afford on a new system. If you're moving to a clustered environment, especially an active/active node system, allow enough RAM for a failover scenario.
Your disk space requirements will require that you go back to your current applications and
get some numbers for your database growth over time. Factor the space required for backups
into this growth number, and add a pad (such as 10 percent). Try to ensure that you have
enough space for a year when sizing disks for your databases. Because budgets are usually
annual, this works well if you need to purchase more disks later.

Planning to Migrate Applications


Once you've developed the hardware design and made some decisions about which databases will be consolidated, you'll begin developing more detailed plans for the migration of the applications, as well as user accounts and databases.
This section discusses the processes involved in moving applications to the new server. This includes client-level changes, possibly firewall or other network changes, monitoring software changes, possible upgrades of application code, security changes, and so on. This is the time when you may perform a migration on a smaller scale, note a problem, perform another migration with a possible solution, note other problems, and then continue until you're comfortable with your process. You'll resolve issues with names and locations, connection issues, and so on. You'll also determine the order in which you need to perform the various steps.
A few of the issues to be aware of and ideas for mitigating problems are as follows:
Migrating logins and users. Database users are often easily moved with the database
itself, especially if you use the backup-and-restore or detach/attach method, but logins
are more difficult. First, make sure you need to move logins, because you may have
duplicate logins already set up on servers being consolidated. If you need to move logins
and retain their passwords, use the sp_help_revlogin procedure as described in KB article 246133, "HOW TO: Transfer Logins and Passwords Between Instances of SQL Server" (a brief sketch of this step appears after this list).
If you have collisions where the same user exists on different systems with different
passwords, note this fact and devise a mediation strategy. Be aware that Active Directory
users and passwords exist throughout a domain. There is no requirement to change such
users if a consolidation is within a single domain.


CERTIFICATION READY?
Can two servers, each with a named instance of Sales and each using a database named SalesDB, be combined onto one server with two instances of Sales and SalesDB? If this can be done, how would users connect?

TAKE NOTE

Security issues. Your jobs, DTS packages, Integration Services packages, or CLR assemblies may require security permissions not set up on the consolidated server. This is the
time to resolve those issues through detailed examination of the logs created during your
testing. Hard-coded or preset passwords may also cause issues. Check connectivity from
all clients and other servers to resolve any issues.
Domain issues. If you're consolidating two servers from different Active Directory domains, create procedures to either grant cross-domain permissions or otherwise handle domain-related issues. This also applies to different Active Directory forest situations. Forests consist of domains, and a multiple-forest situation can be complex.
Migrating DTS packages to Integration Services. This is a more difficult task,
although you can download the DTS runtime, which enables you to run DTS packages
on a SQL Server 2005 server. Look up Upgrading or Migrating Data Transformation
Services in Books Online.
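As a sketch of the login-migration step referenced above, once the procedures from KB article 246133 have been created on the source server, they generate a script of login-creation statements (preserving password hashes and SIDs) that you then run on the consolidated server; the exact statements produced depend on the version of the procedure you use:

-- On the source server, after installing the procedures from KB 246133:
EXEC sp_help_revlogin;
-- Copy the generated output and execute it on the consolidated server.
-- Logins created this way keep their original passwords and SIDs, so database users remain mapped.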
Moving the data is also something you test at this stage, but in limited amounts. You shouldn't transfer a 100 GB database over and over to resolve issues. Instead, create a new database and transfer a small subset of data to it. Then, perform your testing using this smaller database. In some cases, this may be as simple as performing a database transfer using Integration Services, backup and restore, or the detach/attach methods. In other cases, the process may be much more complex, involving the development of customized Integration Services scripts and packages.
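A minimal sketch of the backup-and-restore approach; the database name, logical file names, and paths below are hypothetical and must match your own environment:

-- On the existing server
BACKUP DATABASE Accounting
    TO DISK = N'\\FileShare\Migration\Accounting.bak'
    WITH INIT;

-- On the consolidated server
RESTORE DATABASE Accounting
    FROM DISK = N'\\FileShare\Migration\Accounting.bak'
    WITH MOVE 'Accounting_Data' TO N'E:\SQLData\Accounting.mdf',
         MOVE 'Accounting_Log'  TO N'F:\SQLLogs\Accounting_log.ldf';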

You use actual data or a subset for this testing, but don't focus on specific strategies for one application. You should be developing general procedures to combine multiple SQL Server instances onto one instance or server. If you learn specific items for one application, make a note of them separate from your general procedures.

As you're refining your process and procedure, you'll note some issues that can affect the consolidation. These may include extended downtime, additional resources needed to complete the consolidation, application changes, and so on. These risks should be noted in your plan and used to determine whether you proceed to the next stage of developing the consolidated solution.
Beware of scope creep at this point. There is always work going on with many SQL Server databases, and you'll be tempted to include some deployments of functionality along with the consolidation because the application will be down. Avoid this temptation, because there will be issues associated with the consolidation and it will be difficult, if not impossible, to determine the source of the problems: Is it the consolidation or the enhancements? One example is given in the following hands-on exercise, but there are many others. For example, pruning logins, users, and other SQL Server objects should be tackled as a separate project, either before or after this one.

Case Study: Avoiding Scope Creep


You're developing a consolidation plan for dkRanch Cabinets and are looking to move the Accounting database to the same server that currently houses the SalesCRM database. Both of these applications were purchased from a third party and set up by the
vendors. The standard setup from the vendor uses the sysadmin account to connect to
the SQL Server instance using a password stored in a configuration file on each client.
After contacting the vendor, you learn that there is no technical reason why the applications need to use the sysadmin account specifically, and you receive instructions for
using another account. In what order should you proceed with the consolidation of the
Accounting application? (Not all steps are required.)


Change the client configuration files to reflect the new account and password.
Change the client configuration files to reflect the consolidated server name.
Create the new account, and assign permissions on the existing server.
Create the new account, and assign permissions on the consolidated server.
Add the steps for changing the application account to your consolidation plan.
Add the Accounting application migration steps to your consolidation plan, ignoring
the account changes.
Test the application using the new account.
Solution: This is a bit of a trick question, because the idea in a consolidation effort is to avoid scope creep. The consolidation should focus strictly on moving the application to a new server without changing its functionality. Moving the connectivity to a new account is a major change of functionality and should be completed prior to the consolidation testing. Treating the account change as its own project, finished before the application is considered for consolidation, prevents scope creep. The steps should be as follows:
1. Create the new account, and assign permissions on the existing server.
2. Change the client configuration files to reflect the new account and password.
3. Test the application using the new account.
4. Add the Accounting application migration steps to your consolidation plan, ignoring
the account changes.
The steps specific to the account changes on the consolidated server are ignored because
once the application is changed to the new account, the steps involved in consolidation
would be the same as with any other accounts that currently exist on the Accounting
SQL Server.

Phase 3: Developing
THE BOTTOM LINE

The plan is complete. Begin implementing, prototyping, and testing.

Now that you've developed a plan, it's time to begin real development of the consolidated solution. This phase of your plan requires the actual hardware or its equivalent, so that you can begin testing and refining the plans using full-scale prototypes of the databases and servers. This phase also involves validation of the decisions, piloting the consolidation, and a reexamination of the plan to ensure that it works as expected.

Acquiring Your Hardware


In this phase, you need to acquire at least some of the hardware that you designed in phase 2. A full-scale mock-up of at least one of the consolidated servers is necessary for a full test of the application movement as well as a simulated load that this server will undergo. It's understandable if you can't get exactly the same hardware due to financial constraints, but you should strive to get a system as close as possible in order to accurately assess the performance of the system under real-world loads. If you've purchased new hardware for the project at this time, you should be able to set it up as designed and test your consolidation procedures as well as a full-scale load.


There may be budget restrictions, but this is the last testing phase where you can alter the hardware design before going live. This is why it's important to get as close as possible to the expected live setup and load to validate your decisions. Test various memory and disk configurations to determine whether there are ways to improve performance through reconfiguration. Once the system goes live, these changes will be difficult to make.

Creating the Proof of Concept


At this stage in the process, you should have acquired some of the hardware needed
and documented your plans and procedures for proceeding with the consolidation.
Prior to testing the process in a production environment, you should first implement a
proof-of-concept consolidation.

This stage should work exactly like the full consolidation, just on a smaller scale. Take one set of databases and migrate them exactly as you plan to do with all your servers. This set should result in a consolidated server that looks similar to one from your final design. You may combine two databases onto one server (or more, if that is what your plan calls for), but the final server needs to look like its production design.
You should then use the consolidated server under the same load that it will experience in a production environment. This can be accomplished by replaying traces or generating simulated loads, but whatever method you use needs to be as close as possible to the real-world workload in order to verify that your processes will work as expected. You're trying to ensure that the extrapolations you made for the performance of the consolidated server are accurate in terms of what the final server will experience.
In implementing this proof of concept, you must test all the scripts and steps as well as any connections from other systems, monitoring and administrative changes, applications and tools, and so on. This isn't just a SQL Server test, but an entire application environment evaluation, with special attention paid to metrics (without neglecting the other parts of the system).
This is essentially a dry run of the processes to find and eliminate any problems. You may require a second proof of concept if you have a large number of issues, or if you decide that some things can't be fixed and you must mitigate the problem with other solutions. The examination of this step is crucial to ensuring a successful production deployment.

Creating the Pilot


Once you're comfortable with the hardware and your proof of concept, it's a sound idea to set up a pilot of some less mission-critical applications and consolidate them. If you have a large number of servers being consolidated onto a smaller number, you may choose one of the less critical ones to work out any issues in your procedures. If you're consolidating all your servers onto one large server, perhaps you'll consider consolidating two applications as a pilot and add the others later based on the success of your pilot consolidation.
This pilot should be a full-scale, live movement of the applications to the consolidated server. In essence, it's the first step of your final consolidation, but with the intention of learning from this exercise before continuing on to the next server. This follows the same scope as the proof of concept, with one notable exception: This isn't a test. This is a live deployment in your production environment, and unless the pilot proceeds so badly that you must roll back the servers to their original environment, the databases moved during this stage will remain consolidated for the foreseeable future.


Even though this is a live deployment, you need to perform as much testing as you can and pay increased attention to the consolidated server. This is the first production change you'll make, and the success or failure of this part of the project will affect your company's business. Everything prior to this step was a test and didn't directly affect the end users. This time, any mistakes will have a direct impact on the application as it's used by the company.

REEXAMINING YOUR DESIGN


No matter how well you've planned and how accurate your design and procedures, it's extremely likely that some unforeseen event will occur or some hole will be found in your procedures. After the pilot, reexamine your plan based on the results and any knowledge you've gained in the process.

This is especially helpful if the pilot was a failure. Having to roll back your consolidation effort in the pilot to the original servers doesn't mean you should not consolidate servers; rather, you should determine why the pilot failed and work to correct the problems. Procedures and processes are the most likely sources of a problem, usually due to a setting or configuration not being completed or correct. Examining issues here as compared to the proof of concept will tell you a great deal about how well your testing procedures mimic the production environment.
It's possible that you'll learn your hardware design was insufficient to support the consolidated server. This is a very bad situation if you've already purchased hardware. Allow some pad in your design for the system to be more stressed than you expect. You might not want to deploy a 16-CPU server instead of an 8-CPU server as a pad, but you very well might design a server that has 16 GB of RAM instead of 12 GB to allow for some headroom if you've underestimated the resource needs of a consolidated server.

If you find that you're rewriting whole sections of your plan following the pilot, you didn't spend enough time on phase 1 or 2 (or both) and should go back and rework those phases using the knowledge you've gained. At this time, you should be tweaking your procedures and plans only slightly: reordering steps, adding in a forgotten step, or perhaps deleting a single item.

Phase 4: Deploying
THE BOTTOM LINE

The final phase of your consolidation effort is the deployment stage, where you move the databases and applications onto the consolidated servers in line with your plan and stabilize the applications in a live, production environment.

You now start deploying! Your plan from the planning phase should have involved scheduling that determines the order in which applications and databases are consolidated into their new environment. There may be requirements to complete one consolidation so that hardware can be freed up and reused in a later consolidation, or it's possible that all your changes can be done in parallel. Based on the experiences in phase 3 with your proof of concept and the pilot, however, you may choose to reorder the moves to ensure as little disruption as possible to the business.
Regardless of how well your testing has gone up to this stage, it's highly recommended that you don't perform all your consolidations at once, or even within a short period of time. Other issues will arise in your organization that you must deal with in addition to any glitches in the consolidation. Allow time to work on issues without forcing large sections of your schedule to be reworked.


TAKE NOTE

Staffing can be an issue during a consolidation deployment. Unless you can afford additional consulting help, you'll be asking the existing staff to work longer, usually late-night, hours. Scheduling the consolidations too close together runs the risk of overworking your staff and increasing the likelihood of mistakes. Allow time between the consolidations to let your staff recover from the additional work.
Any change to a production environment runs the risk of destabilizing the applications on which end users depend. This not only leads to less efficient use of the applications for the business, but also frustrates the end users. There will be a point after which you've decided the consolidation can't be rolled back to the original environment. At this point, you must develop workarounds or reconfigurations, or implement contingency plans to stabilize the environment. Some things you may have planned for and some may be completely unexpected, but in either case you must work to ensure that the nontechnical aspects of the consolidation aren't forgotten. Let your end users know of your plans, show them you're working quickly to stabilize things, and apologize for the inconveniences.

SKILL SUMMARY
Consolidation is a huge trade-off process between any number of conflicting requirements. It
may involve trading peak performance for efficiency, costs now for costs later, or some other
set of metrics. Whether this is the right decision for your company or situation is something
that must be examined on a case-by-case basis.
The focus of this Lesson was to provide an outline of how to proceed with a consolidation
analysis and deployment if you decide this is the right decision for you. Remember that the
decision to proceed as well as the process of planning, testing, and deployment is often an
iterative process that requires you to examine your decisions and reexamine them again and
again, considering all your options and their implications.
For the certification examination:

Know how to analyze a dispersed environment. Understand the steps involved in analyzing an environment of SQL Server instances with an eye toward consolidation.

Understand why you consolidate in a single instance versus multiple instances. Understand the reasons why multiple instances may be preferred over consolidation to a single instance.

Know the issues to be aware of in a consolidation. Know a number of issues that impact a consolidation project, both technical and nontechnical.

Knowledge Assessment
Case Study
DkRanch Cabinets
dkRanch Cabinets is a small company with 120 employees that builds custom kitchen
cabinets. The company currently has five applications that it uses to run the business,
each requiring a SQL Server.

Planned Changes
The company would like to consolidate down to two SQL Server database servers: one
for the internal applications and development and one for WebPresence. The company


wants to be sure it can perform the consolidation without purchasing new hardware
and still have a well-performing system.
The new consolidated servers will run SQL Server 2005.

Existing Data Environment


The applications are SalesCRM, Accounting, Inventory, WebPresence, and
Development. SalesCRM is used by the sales application to record sales of products.
The Accounting database handles all the financial applications for the company, and
Inventory is a less secure application used by the woodworkers to record the raw materials they receive along with the finished products that are produced. Development is
the place where the internal IT staff builds and tests the Inventory application before
deploying it to Inventory. WebPresence is the back end for the company Web site. Each
application requires a single database to support it.
The existing servers have baseline measurements as follows:

SERVER         CPU BASELINE    MEMORY BASELINE
SalesCRM       16%             1.6 GB
Accounting     18%             800 MB
Inventory      29%             1.2 GB
WebPresence    32%             1.2 GB
Development    24%             1.6 GB

Existing Infrastructure
The current server setup is on Windows Server 2000, and the individual hardware is set
up as shown here:

APPLICATION    SQL EDITION             CPUS   RAM     HARD-DISK SPACE              DATABASE SIZE   EMPLOYEE USERS
SalesCRM       2000 Standard                  4 GB    80 GB (RAID 1, 2 drives)     20 GB
Accounting     2000 Standard                  2 GB    120 GB (RAID 5, 4 drives)    60 GB
Inventory      2000 Standard                  2 GB    240 GB (RAID 5, 7 drives)    40 GB           12
WebPresence    7 Standard (per CPU)           8 GB    80 GB (RAID 1, 2 drives)     20 GB           2 + anonymous users
Development    2005 Standard                  8 GB    240 GB (RAID 5, 7 drives)    12 GB

All the servers listed are of the same hardware model and have interchangeable CPUs,
RAM, and disks.
Each of these servers meets the recommended requirements for SQL Server 2005.

Business Requirements
The servers can be reconfigured to meet the necessary needs, and any leftover hardware can be redeployed in other areas as other servers are needed elsewhere. If the consolidation isn't performed, other projects will be placed on hold for budgetary reasons.


Management would like to consolidate servers, but they want a valid business reason for
undertaking this project.
An existing project to upgrade the database servers to SQL Server 2005 was already
approved, with Enterprise Edition upgrades slated for the multiprocessor servers.
The company's current DBA is overloaded with tuning and managing the five servers, and a second employee is being considered. If the consolidation is performed, this second hire will be unnecessary.
If a pilot application is to be included with Inventory, the Accounting application
should be used.
Management is behind the consolidation, but disruptions to the WebPresence and
Inventory applications must be minimized during the workweek.

Technical Requirements
The new servers should use no more than 70 percent CPU based on previous baselines.
Any servers that will be redeployed must have at least one CPU, 2 GB of RAM, and 40
GB of disk space. All current servers run RAID, and any redeployed servers must still have
at least two drives to run RAID 1. All the RAID cards can support either RAID 1 or 5.
For the purposes of estimating, each additional CPU above the first counts as one CPU
when calculating loads.
One spare eight-way server with 8 GB of RAM is being returned to the vendor, but it's available for the next two months for testing if needed.
All the applications are developed in-house and can be configured to connect to default
or named instances. The development procedures, however, call for the development
databases to be named the same as their production counterparts.

Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
Use the information in the previous case study to answer the following questions:
1. The current server baselines are listed in the case study. Do the CPU measurements
allow for consolidation to two servers?
a. Yes
b. No
2. The current server baselines are listed in the case study. Do the memory measurements
allow for consolidation to two servers?
a. Yes
b. No
3. Which of the following are valid reasons to proceed with the consolidation? (Choose all
that apply.)
a. Lower salary costs
b. Reduced power consumption
c. Standardized hardware
d. Lower upgrade costs
4. The development group has been planning to add an upgrade to the Inventory application to support new products. Because the application is rarely taken offline, they ask to
include this upgrade in the project plan for the consolidation. What should you do?
a. Include the change in your project plan.
b. Include the change in your pilot plan.
c. Do not include the change.


5. In what order should the following steps be performed for a successful consolidation?
a. Pilot the consolidation using Accounting.
b. Develop a process for migrating the logins from one server to another.
c. Examine the business ROI for performing a consolidation.
d. Form a team for the project.
e. Migrate the remaining applications and stabilize them on the new servers.
6. Testing of your processes on a full-scale load should be performed in which phase?
a. Planning
b. Envisioning
c. Development
d. Deploying
7. The development group has been planning to add an upgrade to the WebPresence application to support new products. Because the application is rarely taken offline, they ask
to include this upgrade in the project plan for the consolidation. What should you do?
a. Include the change in your project plan.
b. Include the change in your pilot plan.
c. Do not include the change.
8. You are proceeding with the consolidation project and need to determine how to set up
the new server. Which configuration should you use to minimize instances?
a. Four named instances, one for each application
b. Three named instances and one default instance, one for each application
c. One default instance and one named instance
d. One default instance
9. You have almost completed your consolidation plan. The Inventory, Accounting, and
SalesCRM applications have been migrated onto the new server. However, when you
add the Development instance, you experience some severe CPU load problems. What
should you do if this is performed late on a Sunday night?
a. Continue into the week and work out any problems.
b. Roll back the Development consolidation and retest your configuration.
10. You are developing an aggressive consolidation plan to complete all server moves in the
current quarter. July 4 falls on the Tuesday after your first planned server move. Four of
your six IT employees have scheduled vacation on that weekend. What should you do?
a. Move the project plan for the first server migration one week ahead to the end of June.
b. Proceed with the plan as scheduled, and ensure the other two employees are well
versed in the process.
c. Move the project plan for the first server migration back one week to the middle of July.

Lesson 4: Analyzing and Designing Security

LESSON SKILL MATRIX
TECHNOLOGY SKILL (70-443 EXAM OBJECTIVE)

Analyze business requirements. (Foundational)
Gather business and regulatory requirements. (Foundational)
Decide how requirements will impact choices at various security levels. (Foundational)
Evaluate costs and benefits of security choices. (Foundational)
Decide on appropriate security recommendations. (Foundational)
Inform business decision makers about security recommendations and their impact. (Foundational)
Incorporate feedback from business decision makers into a design. (Foundational)
Integrate database security with enterprise-level authentication systems. (Foundational)
Decide which authentication system to use. (Foundational)
Design Active Directory organizational units (OUs) to implement server-level security policies. (Foundational)
Ascertain the impact of authentication on a high-availability solution. (Foundational)
Establish the consumption of enterprise authentication. (Foundational)
Ascertain the impact of enterprise authentication on service uptime requirements. (Foundational)
Modify the security design based on the impact of network security policies. (Foundational)
Analyze the risk of attacks to the server environment and specify mitigations. (Foundational)

KEY TERMS

active directory (AD): The operating system's directory service that contains references to all objects on the network. Examples include printers, fax machines, user names, user passwords, domains, organizational units, computers, etc.

audit: An independent verification of truth.

organizational unit: An object within Active Directory that may contain other objects such as other organizational units (OUs), users, groups, computers, etc.

security policy: The written guidelines to be followed by all employees of the enterprise to protect data and resources from unintended consequences. A security policy, for example, should exist guiding all users on how to protect their network password.


In SQL Server 2000, some key security templates made security cumbersome and resulted in workarounds that often didn't meet users' requirements. As a result, one of the key design considerations with SQL Server 2005 was an increased level of security for the server. SQL Server now not only includes more control and capabilities but also makes it easier for the DBA to administer the security policies for the server.

This Lesson will examine the methods and reasoning behind designing an effective database-level security policy for your SQL Server instances.

Gathering Your Security Requirements

THE BOTTOM LINE

Before you can develop an effective security policy, you must understand the requirements
that your plan must meet. These include requirements dictated by your business as well
as any regulatory requirements imposed on your business by governmental or regulatory
agencies. Your plan must cover both of these types, and you must resolve any conflicts
between the two based on your situation.

The requirements imposed on your SQL Servers by the business will in all likelihood be easier to meet (in other words, they will be less restrictive) but will probably be harder to ascertain. When someone in the business decides on a requirement for an application, that requirement may or may not be documented thoroughly, which can cause you difficulties during planning. You'll spend much of this part of the design process interviewing executives, business liaisons, stakeholders in each application, developers, administrators, and anyone else who may know why an application has a security need.
The regulatory requirements, conversely, should be easy to determine. A business IT liaison should be able to let you know which governmental regulations apply. Once you know the applicable laws or codes, you can look them up from the appropriate agency's offices or Web site and incorporate them into your documentation.

TAKE NOTE

WARNING: Make sure you know the exact details of the requirements, and don't rely on a summary from a source other than the regulatory agency. A digest or guideline from another source can help you understand the rules, but your security decisions must satisfy the original requirements.

As you gather this information, document it carefully. You may want to segregate the data by server instance and database for ease of locating it later. You'll use the various requirements to design the security policy for your SQL Server.

In addition to regulatory or governmental requirements, you may be subject to requirements from industry groups, standards bodies, or even insurers. Each certifying, regulating, or industry-related company that interacts with your organization may have its own set of rules and regulations.
Often these governmental rules require different consideration than the rules that are established for the rest of your enterprise. Regulatory rules exist to meet governmental standards
or rules, while your enterprise will have developed rules to meet its own goals. If possible, it
helps to conform all your servers to the same set of rules. This makes it easier for everyone to
both administer the servers and understand the way each server works. This may not be possible for some applications that have conflicting requirements. For example, your accounting
systems may be bound by requirements for auditing that are mutually exclusive from other
systems that require a high degree of privacy for the data. The following are a few example
requirements:


All logins must be mapped to Active Directory accounts.


Customer Social Security numbers must be encrypted as per government regulations.
All data access to the medical database must be audited.
Only bonded individuals can be assigned system administrator privileges as per insurance guidelines.

After you gather the requirements from all sources, be sure to document any existing security settings on your SQL Servers. These may or may not be in conflict with the requirements, but in designing a security plan, you should consider the current environment. Have mitigation plans handy for any changes to be sure that the databases remain available and functional to users.

Before examining how you'll use these requirements, you must understand the security scope in SQL Server.

Case Study: Gathering Requirements


You've been assigned the task of architecting a new infrastructure for the SQL Server 2005 upgrade at a U.S. pharmaceutical company. To ensure that your design complies with all applicable requirements, you schedule interviews with the chief operations officer and his staff as well as the senior researchers.

You're informed that you must adhere to a number of requirements: 10CFR15 as mandated by the U.S. government, Sarbanes-Oxley requirements for the company as a publicly held entity, and various insurance requirements to ensure worker and customer safety.

The process of complying with these regulations means you must validate every security decision against all the different requirements. An internal committee of employees will check your plan's compliance when you've completed it.

Once you've made the necessary decisions, you need to ensure that a representative from each body whose requirements you're meeting audits the plan and documents compliance with or deviation from each of their requirements.

Understanding Security Scope

REF: External Windows server-level security will be dealt with in Lesson 5, and internal server instance and database security in Lesson 6.

In SQL Server, security is applied at various levels, each encompassing a different scope on
which it applies. Security can be applied at the server level, the database level, and the schema
level. This Lesson will examine overall security system design for the entire enterprise.
Figure 4-1 shows the hierarchy of a SQL Server. The highest level is the server instance, which
contains one or more databases. Each database has its own users, which are mapped to server
instance level logins. Database security applies to the database container as well as all objects
within that database. Outside of the SQL Server are the Windows server and enterprise-level
security structures.
SQL Server has a four-part set of security levels: server, database, schema, and object. The schema level was introduced with SQL Server 2005. A schema is essentially a container of objects within a database; a single database can include multiple schemas. SQL Server 2000 blended the object's owner and a schema to form a multipart naming system for objects. Thus dbo.TestTable and Steve.TestTable were two different objects. However, the owner, Steve in this case, was bound to the objects, and it was cumbersome to remove the user Steve.

Figure 4-1: SQL Server hierarchy

REF: Lesson 6 discusses the permissions for separate schemas.

SQL Server now separates the schema from the owner in the database. As you'll see later in this textbook, this difference allows you to meet the security needs of the application without imposing a large burden on the database administrator.
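A brief sketch of the difference (the schema, table, and column names here are hypothetical): a schema can be owned by dbo while developers simply create objects within it, so removing a developer's user later doesn't force you to rebind every object:

CREATE SCHEMA Sales AUTHORIZATION dbo;
GO
CREATE TABLE Sales.Orders (OrderID int NOT NULL PRIMARY KEY);
GO
-- Objects are referenced by schema rather than by owner:
SELECT OrderID FROM Sales.Orders;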
SQL Server also has a number of encryption capabilities along with a more granular permissions structure that enables you to meet almost any security requirement for your enterprise. You'll learn about these encryption capabilities in later sections as you develop a database security plan.

Analyzing Your Security Requirements


Once you've gathered all the security-related requirements for your database(s), you must begin to analyze how they affect your SQL Server. You can meet most requirements in a variety of ways, and by examining your application's various needs, you can choose the appropriate SQL Server vehicle.
The first step in examining security requirements is to determine the scope of each
requirement. This Lesson looks at database-level security; subsequent Lessons will examine
other scopes. A requirement should fall into one of the scopes described in Table 4-1.
Table 4-1: Security requirements scope criteria

Server level: Anything that references the login to the SQL Server instance or involves the configuration of the instance. Authentication of an individual or service is addressed at this level.

Database level: Requirements that address the storage of data in a database, encryption of data, or the security of all schemas contained in a database.

Schema level: Application-specific requirements that deal with access to specific SQL Server objects (tables, views, stored procedures, and so on) that will be stored in the same schema and accessed separately from other objects in another schema.

Service level: Requirements that address the security of a service, HTTP endpoint, or Service Broker queue.


You should classify each requirement as needing attention at one of these levels. The specifics of these levels are addressed in later Lessons.
Because requirements for security can be general and encompass many different areas, it's difficult to provide a comprehensive list that specifies where requirements fall. Table 4-2 gives a few examples of requirements at the various levels to show how your analysis can classify sample requirements.
Table 4-2: Sample security requirements classification

Login security must be integrated with Active Directory: Server level

It must be possible to deny a particular login access to the server if necessary: Server level

Developers must have read-only access to production database systems: Database level or schema level, depending on the design of the database

Web services used for reporting to clients must only have access to the invoice portion of the Sales database: Schema level if these tables are separated from others by schema; otherwise, database level

Service accounts must be unique for each instance/service combination: Service level

No user should own any tables: Schema level

Developers should be able to manage all objects in their development databases: Schema level

Items that fall at the database or schema level need to be addressed and considered when any
database design changes are made. Your security architecture must be followed during the
fundamental development of objects.

REF: Lesson 7 deals with object-level security issues.

Dealing with Conflicting Requirements


Many companies have only one set of security-based requirements; some have none. In such companies, it's unlikely that you'll have conflicting requirements. However, in companies that must follow governmental regulations or guidelines from standards bodies, it's possible that security needs will conflict.
For example, suppose you work for a medical firm that manages records for a series of hospitals. Privacy regulatory requirements dictate that patients' Social Security information be encrypted, but your application relies on this data when searching patient information. Because encrypting these columns would prevent them from being indexed, you determine that this approach is unacceptable; instead, you decide to encrypt the patient names, which aren't used for searching. Doing so is in conflict with the explicit requirements.
As a designer of the enterprise's security infrastructure, you most likely need to seek guidance from your superiors or executive management about how to proceed. Making a decision on your own may not lead to a result that meshes with the desires of your company's leaders and may end up costing the company financially. It can also be hazardous to your career! In this example, you should approach your firm's executives with your reasons for making the security decision to encrypt the patient names. One reason to make this decision is that patients' privacy is still protected, because their names are encrypted. However, because this action could be construed as a violation of the regulatory requirement, the company has three choices: agree that this possibility is acceptable, seek approval from the regulatory agency, or change the application.


CERTIFICATION READY?
Be prepared for exam
questions giving you
choices on conflicting
requirements. Pay
attention to stated
objectives and their
importance.

WARNING: If you aren't a corporate officer, then you are somewhat shielded from legal responsibilities, but you aren't completely absolved of responsibility if you don't meet regulations. Losing your job is one thing; going to jail is something else entirely.

You should make decisions yourself as much as possible; but when you're faced with mandates or directives that conflict with one another, you need to seek resolution from those in charge of the company, especially if the decision is made to stray from regulatory guidelines. Company leaders often have a working relationship with standards bodies or governmental offices and can adapt the requirements to meet your company's needs.
If you're forced to choose between conflicting requirements yourself, understand the implications of ignoring any particular set of rules. In making your decision, you should meet all requirements to the greatest extent possible, but understand that governmental regulations usually are more important than corporate or certification ones. Penalties for ignoring requirements that have been written into law or codified by a governmental office can bring financial woe to your company and may result in incarceration.
If you're choosing between your corporate mandates and the guidelines of a standards body or certification (such as ISO 9000), you should follow your corporate mandates. This is a general guideline; make sure you have the permission of your company's executives to proceed in this manner.

Analyzing the Cost of Requirements


Not all requirements you gather will be implemented on your SQL servers. Regulatory
and mandatory requirements will be adhered to, but there may be requirements that the
business would like to impose but chooses not to for cost reasons.
Every security decision you make has a cost. It isn't necessarily a monetary cost, such as the purchase of a piece of auditing software. It can be a cost in terms of time (RSA 2048-bit encryption takes too long to complete with current technology), in terms of effort (requiring two-factor authentication will result in too many errors from users), or in terms of another resource. As the designer for your SQL Server infrastructure, you need to weigh the costs and benefits of each decision to determine whether it's worth pursuing.
Financial costs are simple to determine via price quotes from vendors and suppliers, licensing
costs based on existing installations or user counts, and so on. You can generally gather this
information easily and use it to determine the amount of money that your company must
spend for security items. Make sure to assign these direct dollar costs to each particular item.
Nonfinancial costs are difficult to establish, and you'll have to decide how your company will assign the value of those costs. You need to allocate a value in dollars (or some other currency) so that you have a method of measuring these expenses along with other costs. You can do this in a number of ways, almost all of which require that you consult with the people and departments that will be affected to gather information about the impact of a particular decision.
Time is an easy cost to determine. Often, the time an event takes can be translated into an expense based on the cost of the resources involved. Each employee has a cost that can be divided out to determine the per-minute value of his or her time. Security decisions often impose a burden on people that equates to time spent on some activity, so it's relatively simple to determine the security cost of a particular decision.

TAKE NOTE

LAB EXERCISE
Perform the exercise in your lab manual.

When you examine the cost of time, include all the people involved. For example, a password change resulting from a security decision to expire passwords results in the use of the
time of at least two people: the person deciding whose password must be changed and the
person making the change.
Other costs, such as increased time for customers or clients to use your system, their desire or ability to work with your system, or even potential costs for others to integrate with you, must be estimated by someone in your organization. The sales department may need to examine your requirements and determine the opportunity cost of a decision on the company's overall ability to generate revenue.
In Exercise 4.1, you'll determine the time cost of resetting passwords.


If need be, you can extrapolate this number to other numbers of employees based on the
expected growth or shrinkage of your workforce. For example, what is the cost for 10 people if
the average salary is $40,000? What is the cost for 20 people if the average salary is $40,000?
This cost analysis section of your design is purely subjective, based on the business in which you're working. You'll need to solicit feedback from others in the business when you make your calculations and also review your results with them to be sure you're correctly accounting for the costs of your changes.

BENEFITS
The cost analysis of your security design also has another aspect: the benefits analysis. Each design decision, from password policy to encryption to the use of roles in your SQL Server, brings a security benefit to your enterprise. The results may include lower risk of data loss, better marketing material to help sell your product or service, or a time savings that affects an employee's job.
When conducting the cost analysis, make sure to consider the benefits and point them out in
your security plan. Too often, security is seen strictly as a cost, without including the benefits
that result from implementing a particular technology or process.

TAKE NOTE

The benefits of a security policy can be hard to quantify. Extra attention paid to security is
frequently used as a marketing tool to showcase companies. Be sure you communicate the
positive aspects of your security plan to the marketing or sales department.

RISK FACTORS
Closely tied to the cost analysis for many items is the risk that some event will occur. For
example, suppose you determine that a SQL Injection attack on your SQL Server will result
in an average loss of $5,000 in time, product, investigation, and so on. However, using
industry data and past experience, you conclude that there is only a 1 percent chance of such
an attack each month.
The analysis for this event needs to calculate $5,000 at a 1 percent risk level, or a $50 per
month average loss. Any benefits or costs associated with preventing this event should be
compared against the $50 per month value rather than $5,000 per month.
Such risk factors can be hard to determine, but your insurance company can most likely help
you. The insurance industry is built on statistical analyses of various events and the probability that they will occur. In most cases, security deals with an event that causes a breach and
results in a loss of money; your enterprise's insurance company can help to quantify the actual
risk levels and the cost or benefit you should assign to any given design decision.

Integrating with the Enterprise

THE BOTTOM LINE

SQL Server uses two methods for authenticating logins to the server: Windows authentication
of users using Active Directory (AD) or local Windows users, and SQL Server authentication
using a name and password. Your company may use other methods of authentication, such
as RADIUS, Novell's Identity Manager, or other enterprise identity-management software.
These two, however, are the only ones available to SQL Server, and you'll need to choose one
or both of them for your integration efforts.
Windows authentication in a domain environment uses Active Directory and works with the
users and groups you've already set up in your Active Directory database. You can add users and
groups as logins for your SQL servers, and the users' credentials will automatically be checked
against Active Directory when they attempt to log on; they won't need to reenter their password.
In contrast to Windows authentication, SQL Server authentication stores the login name and password in the server's master database. To log on to the server, users supply the name and password,


which are matched against the values stored in SQL Server. Each time a user logs on to the server,
he or she must supply a name and password for the connection.

Choosing an Authentication Method


In deciding which authentication method you'll use in your SQL Server infrastructure,
you should consider your enterprise's existing environment. If AD is already present and
widely deployed to the clients that will connect to SQL Server, then this is the preferred
method of attaching to SQL Server. In this case, you should disable SQL Server authentication on your servers to reduce the surface area of attacks. Without SQL Server authentication, there are fewer ways to connect for all users, including intruders.
TAKE NOTE

Password policy enforcement is available only on Windows Server 2003 and 2008.
CERTIFICATION READY?
Know when to use the different types of authentication. For example, can a Vista Home Premium computer user log in using an Active Directory user ID?

LAB EXERCISE

Perform the exercise in your lab manual.

This type of authentication offers the advantage of tying access to SQL Server directly to an
individual who has accessed other resources on the network. It also simplifies access because
the user doesn't need to remember a separate account and password combination. The underlying AD infrastructure and the client can automatically authenticate the user. This approach
also ensures that Windows password policies are enforced and the user password is periodically changed for all resources.
If you have clients that can't use AD and must authenticate with a name and password, then
you'll need to enable SQL Server-authenticated connections. This method isn't enabled by
default and must be changed for each server during or after installation.
Non-Windows clients or applications using a technology such as Java or Perl that doesn't support the Windows authentication technology will require you to enable SQL Server authentication. This approach adds an administrative overhead of managing a second set of users and
passwords that is separate from your enterprise list of users.
Choosing SQL Server authentication doesn't disable Windows authentication. Your choices
are Windows authentication only or both SQL Server and Windows authentication.
In Exercise 4.2, you'll learn how to change an authentication mode.
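As a quick check of which mode an instance is currently using (a minimal sketch; the full procedure is covered in the exercise), you can query a server property from any query window:

SELECT SERVERPROPERTY('IsIntegratedSecurityOnly');
-- Returns 1 when only Windows authentication is allowed,
-- 0 when both SQL Server and Windows authentication are enabled.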

Setting Up Using Groups and Roles


Regardless of whether you choose to use AD as an authentication mechanism, you should
follow its model of users being assigned to groups and rights being granted to those
groups. This is a fundamental principle of Windows and SQL Server security and one
that your design should adhere to in philosophy.
Because most enterprises have multiple applications, the specific policies or rights granted for a
particular application should be documented with that application. However, your overall design
should ensure that roles are created for tasks and that the groups of tasks common to a particular
job are bundled together into a role. This is possible with the use of AD Organizational Units
(OUs) and groups, which let you assign groups as members of higher-level groups.
This doesn't work, however, with internal SQL Server security. A database role can't contain
other roles, only users. This means your security must be at a less granular level when creating roles and assigning permissions.
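A small sketch of the pattern, with hypothetical object and account names: create a role for a task, grant it only the rights that task needs, and add users (or Windows groups that have been added as database users) as members.

CREATE ROLE SalesDataEntry;
GRANT INSERT, UPDATE ON dbo.Orders TO SalesDataEntry;

-- The member must already exist as a database user mapped to a login.
EXEC sp_addrolemember 'SalesDataEntry', 'CORP\JSmith';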

UNDERSTANDING KERBEROS
Kerberos is an enterprise network authentication technology that uses tickets passed between
servers and clients to authenticate users. It is part of Windows 2000 and newer Active Directory
domains, which use the TCP/IP protocols for network communication. Your SQL Server can use
Kerberos for its users as well if they are authenticated via Windows authentication. However, the
decision to use Kerberos means that all your clients must communicate with SQL Server using
the TCP/IP protocols. Other network protocols exist, as do other directory services such as
Novell's NetWare. These other environments obviously cannot use Windows authentication.


WARNING There are a few issues with configuring Kerberos, so be sure you consult the documentation for SQL Server. Use sys.dm_exec_connections to determine if it is enabled. Check the BOL topic "Using Kerberos Authentication" for more details.

To use Kerberos, SQL Server must be registered with a Service Principal Name (SPN) in
Active Directory. This ensures that it can be managed within the Active Directory schema.
When the SQL Server service account is configured to use the local system account, the server
will automatically publish the SPN in AD for you. However, a SQL Server best practice is to
change the startup account from local system to a domain user account to better secure the
SQL Server instance. If you're using a domain user account to run the SQL Server service,
then you have to manually create the SPN for the account in Active Directory. This can be
done with the setspn utility program.
If you choose to use Kerberos as an authentication mechanism, coordinate with your network
administrators to be sure your clients can support the protocol and your infrastructure is set
up to implement it.
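A rough sketch of the two steps involved (the server, domain, and account names are placeholders): a domain administrator registers the SPN from a command prompt, and you can then confirm from a query window whether a given connection is actually using Kerberos.

-- Run from a command prompt by a domain administrator (not T-SQL):
--   setspn -A MSSQLSvc/sqlprod01.corp.example.com:1433 CORP\sqlservice

-- Run from a query window to see how the current connection authenticated:
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
-- Returns KERBEROS, NTLM, or SQL.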

IMPLEMENTING ADMINISTRATIVE SECURITY


No matter which authentication method you choose, you can always use Windows authentication for your administrators, and you should choose to do so. Because SQL Server is a
Windows platform technology, the DBAs and other administrators can and should be able to
authenticate using this method.

REF

Lesson 6 discusses additional server-level roles.

All administrators for SQL Server should be configured using server-level roles and Active
Directory groups or OUs to group users together by their particular job function. Toward
this end, you should determine the different functions for which your administrators will
be responsible and then create the OUs or groups necessary for those roles, adding your
Windows users into those roles.
SQL Server lets you set the permissions for serverwide administrative functions in a more
granular fashion. Your security design should incorporate the idea of the least privileges necessary for a particular group to perform a particular function. If specific people are responsible
for adding users and logins to your SQL Server, then don't add them to the sysadmin role.
Instead, assign them the securityadmin role, and allow them to perform that function.
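For example, a sketch of granting a Windows group login-management rights without making it sysadmin (the group name is a placeholder and must already exist in AD):

-- Add the Windows group as a login, then place it in the securityadmin server role.
CREATE LOGIN [CORP\SecurityAdmins] FROM WINDOWS;
EXEC sp_addsrvrolemember 'CORP\SecurityAdmins', 'securityadmin';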
Your decisions about administrative security should not impose a large burden on the system
administrators. If one DBA is responsible for the server, then it doesn't make sense to create
four or five Windows groups for different functions. Just assign this person to a group in the
sysadmin role and let them manage the server.

TAKE NOTE

REF

Lesson 6 discusses application roles.

The recommendation is that the Windows administrators group be removed from the SQL
Server sysadmin group to ensure a separation of duties and limit the ability of non-DBA
administrators to work inside SQL Server. Consider this even in small companies where
one person currently performs both functions; the next person to fill each job may not be
the same individual, and he or she can easily be added to one group without automatically
being a member of both.

IMPLEMENTING APPLICATION ROLES SECURELY


Application roles are a database-level security tool, but one that you should consider in an
overall security plan for your SQL Server infrastructure. These roles allow a connection to
receive a set of rights different from the ones it gained when connecting to SQL Server. If
you need to secure data in applications from access outside of those applications, you should
consider implementing application roles.
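A minimal sketch, with hypothetical role, password, and table names, of how an application role is created and then activated by the application after it connects (the activated rights replace the connection's normal permissions for the rest of the session):

-- Create the application role and grant it only what the application needs.
CREATE APPLICATION ROLE SalesApp WITH PASSWORD = 'Str0ng!AppR0lePwd';
GRANT SELECT ON dbo.Orders TO SalesApp;

-- The application activates the role after connecting:
EXEC sp_setapprole 'SalesApp', 'Str0ng!AppR0lePwd';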
ALLOWING IMPERSONATION AND DELEGATION SECURELY
As with application roles, impersonation using the EXECUTE AS keywords is more of a
database-level security feature. However, your overall SQL Server security design should
address whether this technique will be allowed on your servers. In some instances, regulatory

96 | Lesson 4

or other requirements for auditing may prevent the use of this feature. For example, in some
financial applications, a user initiates an action, such as trading a security. If this action is set
up in the application to be performed as another user, using the impersonation capabilities
of the EXECUTE AS clause, then the impersonated user will appear to have performed the
action. Because other users could share this capability, it will appear in audit records that the
same SQL Server user performed all trades, and this may violate the requirement of auditing
who actually performed the trade.
Delegation occurs when the SQL Server uses the credentials from the connection to access
other SQL Servers for a distributed query. Again, your security policy should address whether
this is allowed and what type of configuration should be used in implementing this feature.
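To illustrate why auditing is affected, here is a sketch with hypothetical names: a procedure created with an EXECUTE AS clause runs under the impersonated principal, so functions that record identity see that principal rather than the caller.

-- Runs as the procedure's owner no matter who calls it.
CREATE PROCEDURE dbo.TradeSecurity
WITH EXECUTE AS OWNER
AS
    SELECT SUSER_SNAME() AS audited_login, USER_NAME() AS audited_user;
GO

-- Impersonation can also be switched on and off directly (the user must exist in the database):
EXECUTE AS USER = 'TradingServiceUser';
SELECT USER_NAME();   -- Returns TradingServiceUser
REVERT;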

Assessing the Impact of Network Policies

WARNING Make sure your development servers, especially those that contain copies of production server data, are protected in the same way as the production servers.

REF

Lesson 5 talks more about physical security.

Network policies and infrastructure can have a substantial impact on the design of your SQL
Server security policies and procedures. You should follow a few general policies from
a security standpoint, but many network infrastructure decisions can have a substantial
impact on the security of your database servers. Because SQL Servers tend to contain
important enterprise data, they should be protected in some basic ways. First, each SQL
Server, along with other important network servers, should be physically protected in a
locked, controlled-access room. Your network policy should include this type of mandate.
Make sure this is the case.

This protection of the servers should also extend to the backup systems, whether disk or tape.
A number of security breaches involving database systems have occurred when backup tapes
were compromised. Both encryption technology and physical protection should be applied to
any removable media used for backup of your SQL Server data.
In addition to being physically protected, all SQL Servers should be logically protected at
the network level by firewalls. All connections to the server occur through network access,
even those from the local server console, so a firewall helps to ensure that only legitimate
clients are allowed to access the SQL Server. The network infrastructure team should be
aware of all SQL Servers and the access requirements of clients to configure the appropriate
firewall rules.
As you deploy SQL Servers or work with the existing environment to better secure your
instances, you'll work closely with the network team in the placement of your instances
within the network. In many cases, you'll want to place your SQL Servers in a central
location to ensure quick response times for clients while protecting them to some extent
from unauthorized access. For SQL Servers that provide data to Internet-accessible systems, this often means placing them in a demilitarized zone (DMZ) on the network.
However, you may also place them near other servers that are segregated from desktop
clients on the network.

CERTIFICATION READY?
Suppose that a new remote corporate location should have users access the central corporate SQL Server that is behind the corporate firewall. How could this work? What might need to be changed or specified?

This configuration on the network may also extend to internal routers as your network grows.
Because large networks usually contain a number of subnets, the appropriate traffic should
be blocked or allowed through to SQL Servers based on the need to access that information.
SQL Servers that are used for storing data that isn't accessed by the enterprise, such as those
used for auditing systems, monitoring systems, and so on, should be behind routers or firewalls configured to block random access from clients.
The specific traffic requirements of SQL Server, TCP/IP connections versus named pipes,
Secure Socket Layer (SSL) connections, encrypted traffic, and so on, will require that your
DBAs and the network administrators work closely together to ensure that inappropriate
access doesn't take place and appropriate access is granted.


In addition to the protection of SQL Server, you need to work with the network team on taking advantage of SQL Server's various features. Many of the capabilities of SQL Server require
certain configurations of the network and access beyond port 1433 for clients. Table 4-3 lists
some of the capabilities requiring network interactions.

Table 4-3
SQL Server features requiring network configuration

WARNING Microsoft has implemented many ways of accessing the network both into and out of SQL Server. However, each method you implement greatly increases the surface area of attack. You should use them cautiously.

FEATURE: Named instances
REASON: Named instances work with ports other than 1433. To ensure proper security, each should be assigned a port that is configured in a router or firewall.

FEATURE: SSL encrypted connections
REASON: SSL connections should use a certificate procured for and assigned to your enterprise by a trusted authority. The network team usually acquires and installs these.

FEATURE: SQL Server Integration Services (SSIS)
REASON: Many of the connections available in SSIS require network access, such as web service connections. These should be appropriately configured in a firewall.

FEATURE: Common language runtime (CLR) programming
REASON: By default, CLR assemblies are disabled on the server. If they are installed on the server with explicit permissions, by default they can't access resources outside the server. If permissions are required to access objects outside the server, the requirement and implementation should be documented and the network and routers appropriately configured. This is especially critical for the UNSAFE permission set, because there are virtually no restrictions on what an assembly can do with this level of permissions assigned.

Achieving High Availability in a Secure Way

THE BOTTOM LINE

REF

Lesson 10 discusses the design of high-availability systems.

Many businesses want their SQL Servers to be available 24 hours a day, 7 days a week,
every week of the year. SQL Server includes a number of new and enhanced features to
help businesses achieve a highly reliable database server. However, many high-availability
(HA) solutions can impact your security design, because you're essentially spreading the
security of a single system across multiple servers.

You can design a highly available system in a number of ways, and each requires different
security considerations in SQL Server. Table 4-4 lists these technologies and some of the
security ramifications.

Table 4-4
High-availability security considerations

HA TECHNOLOGY: Clustering
SECURITY IMPACT: A clustered solution has two or more installations of SQL Server that are set up separately but that present a single logical instance of SQL Server. The security access set up on these underlying instances must be consistent so that DBAs and other administrators can work with either database instance. The accounts for the instance itself also have restrictions.

HA TECHNOLOGY: Database mirroring
SECURITY IMPACT: Setting up mirroring usually entails two or three SQL Server servers, which need to communicate with each other. This means authenticating the connections between them. In addition, logins for users must be available on both servers. Security policy should ensure that changes on one server are propagated to the other. The servers can be set up to use Windows authentication or a certificate-based authentication. The capability exists to work outside the domain structure. You should consult the database mirroring documentation if implementing this feature. Note that mirroring was not available as a supported technology in the initial release of SQL Server 2005. Database mirroring is available with the release of Service Pack 1 for SQL Server 2005 and with SQL Server 2008.

HA TECHNOLOGY: Replication
SECURITY IMPACT: This isn't specifically an HA technology, but many companies use replication to move data between servers as protection against a system failure. Logins and security should be set up the same way between replicated databases if they are used as an HA tool. The replication agents also need security to be set up in keeping with the principle of least privilege. These agents can be running under separate security accounts and should be configured as such with the minimum necessary permissions.

HA TECHNOLOGY: Log shipping
SECURITY IMPACT: Log shipping uses the SQL Server Agent service to perform backups, move files, and perform restores. The SQL Server Agent service account is the default, but you can configure a separate proxy for this purpose. A separate proxy is preferred, with only the permissions necessary to perform the log shipping functions. Only sysadmins can implement log shipping.

HA TECHNOLOGY: Manual backup and restore
SECURITY IMPACT: Administrators need administrative permissions and rights to access both the backup servers and the backup media (disk or tape).


No matter which HA technology you choose to implement (if any), you need to make sure
your security policy covers the permissions and policies for each of them. Each will have different security requirements, and it's easy to forget to properly secure them. Not assigning
tight security to your backup systems can result in vulnerabilities to your business, and assigning very restrictive security policies can result in the failover systems not being available when
they are needed.
Implementing any of these HA technologies means your security policy must account for the
corresponding needs and requirements. Most of these technologies require the existence of an
Active Directory domain in order to achieve the authentication required between the servers.
Your policy should specifically address which ones and how the authentication mechanisms
will be implemented in your enterprise.

Mitigating Server Attacks


Most of your security policies will be developed to handle internal separation of duties
and prevent accidental integrity problems from untrained internal users. For most of your
users, security should be a transparent entity that they don't consciously interact with.

REF

Lesson 7 gives more details about data security design.

However, many threats to your data are malicious in nature. The mainstream press tends to
portray hackers on the Internet as a great threat to your servers, but there are also corporate
hackers who can compromise your specific servers. These may be consultants, competitors,
disgruntled employees, or any other individuals who can physically enter your business or
interact with your employees. Generally, internal threats are greater than external threats.
In either case, security should be a barrier that prevents these individuals from gaining access.
Your security should prevent them from changing or accessing data and, properly designed,
should prevent them from even knowing the data is there.
You need to implement two types of security to mitigate attacks on your server and design at
least one process into your overall security plan. You must design detailed technical security
in setting permissions, assigning roles and rights, and integrating with enterprise and network
systems. You also need to ensure that administrative security policies are in place to prevent
social engineering practices from being successful. Finally, you should ensure that the Surface
Area Configuration Manager tool is run on every installed SQL Server and any vulnerabilities
or warnings that crop up are addressed in your policy.
Technical security is the easy part of this exercise. Network access to the server should be
controlled by properly configured firewalls and routers as well as integration with enterprise
authentication. Password policies, the use of roles for rights, and proper data security designs
will address these technical requirements. Each of these items is addressed in later Lessons in
this book. Proper application design to prevent SQL injection is also important.

WARNING It isn't just outsiders who gain access by social engineering. Employees sometimes engineer additional access for malicious reasons. Your policy shouldn't have exceptions, but it should include auditing in the event that an exception is made.

Administrative policies for security are much more difficult to enforce and train people to
use. Many of the security compromises that occur in business do so because of socially engineered access. With social engineering, employees are often tricked into trusting an outsider
and granting access or disclosing the details of their account. Social engineering of passwords,
access rights, or any other circumvention of security policy is difficult to guard against.
Employees are naturally trusting of each other, and they have a tendency to shortcut rules to
help other employees. Skilled hackers can use this natural tendency to trick an employee into
giving them access they shouldnt have.
Constant testing of employees' adherence to policy, and penalties for bending the rules, are required
to prevent social engineering attacks. Because this is an administrative security policy rather
than one implemented using computer tools, you should consult with your Human Resources
department about what is and isnt allowed as a policy.
You need to be aware of a few different types of attacks and plan for them.

100 | Lesson 4

PREVENTING SQL INJECTION ATTACKS


The number-one external attack on databases is through SQL Injection, especially as more
databases are used to power Web sites that anyone with an Internet connection can access.
SQL Injection occurs when someone submits additional SQL code into a field where an
application is expecting only data and then uses a statement terminator to execute a second
statement. For example, suppose you have a Web site that asks for a name and password.
After a user submits the name Bob and the password 1jdrk4, the following SQL is executed:
SELECT Name FROM Users WHERE Name = 'Bob' AND pwd = '1jdrk4'

This SQL code verifies whether the name and password submitted are correct if a row is
returned. Thousands of applications have been built using this technique. The problem
occurs if an attacker submits something like the following in place of the password:
Mypassword'; SELECT * FROM Users; --

In this case, when the variables are substituted into the previous SQL statement, this happens:
SELECT Name FROM Users WHERE Name = 'Bob' AND pwd = 'Mypassword';
SELECT * FROM Users; --'

Note that now two statements are executed: the one that is expected and a select to return all
data from the Users table. Depending on the structure of the application, the attacker could
conceivably gain information about all users on the system. Additional code has been injected
into the SQL statement.
This vulnerability mainly comes from building a SQL statement in an application by concatenating user input into the statement text and then executing it. The recommendation to use only functions and stored procedures goes a long way toward
preventing SQL Injection attacks by encapsulating the code inside another structure and using
parameters in your code instead of building a statement from the strings stored in variables.
You should develop a policy for all your application development work that seeks to minimize
SQL Injection vulnerabilities by using only precompiled modules and requiring all code to conform to best practices. If possible, you should specify that dynamic or ad hoc SQL not be allowed.
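As a sketch of what such a policy looks like in practice (the procedure name is illustrative; the table and columns follow the earlier example), the same login check can be written so that user input is only ever treated as parameter values:

CREATE PROCEDURE dbo.CheckLogin
    @Name nvarchar(50),
    @Pwd  nvarchar(50)
AS
    -- The parameters are treated strictly as data; injected text such as
    -- Mypassword'; SELECT * FROM Users; -- cannot terminate the statement.
    SELECT Name FROM Users WHERE Name = @Name AND pwd = @Pwd;
GO

-- Where dynamic SQL is unavoidable, sp_executesql with parameters is far safer than concatenation:
EXEC sp_executesql
    N'SELECT Name FROM Users WHERE Name = @Name AND pwd = @Pwd',
    N'@Name nvarchar(50), @Pwd nvarchar(50)',
    @Name = N'Bob', @Pwd = N'1jdrk4';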

MANAGING SOURCE CODE


With the addition of modules like .NET assemblies as a method of writing functions, user-defined data types, and other SQL Server objects, there is a new surface through which attacks
can be made. Developers often use a module that works for them without necessarily requiring
the source code. Because source code often raises the price of an assembly, many companies
purchase components for a specific piece of functionality without obtaining the source code.
This policy opens you up to potential backdoors written into the code as well as potential
SQL Injection attacks, buffer overflows, and more. A good security policy requires code
reviews of all source code by someone other than the developer to ensure that it conforms to
your standards as well as best practices. This includes purchased assemblies as well as internally developed code.

GUARDING AGAINST HTTP ATTACKS


SQL Server includes the ability to generate web services as well as send and receive HTTP
traffic from outside sources. This means attackers can directly seek to cause buffer overflows
or take advantage of any vulnerabilities that may be discovered in the web services protocols.
To ensure that the security of your database server isn't compromised, access through this protocol should be limited to those machines or subnets that require this information. In addition, you should ensure that administrators are watchful for any issues with the http.sys driver
in Windows or any web service attacks.
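One simple way to keep an eye on this surface area (a sketch; the endpoints you see will vary by server) is to list the endpoints defined on the instance and confirm that any HTTP or SOAP endpoints are expected and restricted:

SELECT name, protocol_desc, type_desc, state_desc
FROM sys.endpoints;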
THWARTING PASSWORD CRACKING
Prior to SQL Server 2005, some tools could perform a brute-force dictionary attack on SQL
Server logins with access to the syslogins table. These tools worked extremely fast, cracking
almost any password within a few hours.


Although such tools haven't been proven to work on SQL Server 2005, it's possible that
some such tool will be developed. Using a strong password policy that forces changes on a
regular basis can help to thwart this type of attack. You should also set access controls to limit
the ability of regular users to access the syslogins table.

Protecting Backups
In addition to the database server, you need to secure the backup data extracted from
SQL Server in the event of a disaster. Most companies use offsite storage for their backups, whether tapes in a temperature-controlled environment or real-time disk backups in
another location. This data must be protected just as strongly as the production data,
because it contains a copy of your databases at a point in time. Many news stories in
recent years have described how backup tapes containing sensitive data have been pilfered.

Your policies concerning backups should ensure that access to them, whether on physical
media or on a file system, is limited to those individuals who require this access (usually
system administrators). In addition, with privacy and data security laws being enacted, data
given to developers for testing purposes should be obfuscated in some manner.
The password features for SQL Server backups may prevent their restoration by an attacker;
however, the files themselves have been shown to contain the data in clear text. A third-party
encryption product or the Windows native encryption features for the file system should be
used to prevent anyone from accessing this data if they manage to obtain the files.

Auditing Access

THE BOTTOM LINE

CERTIFICATION READY?
Your Windows OS and SQL Server maintain multiple log files. Which log maintains auditing data? Can you identify other logs and their purposes?

One of the main ways in which you can evaluate your security policy is by examining its
effectiveness over time. This requires that you implement auditing that both tracks changes
made to the individual SQL Servers and catches any attempts at inappropriate access.

Just as a database can track changes to data made over time, your security design should
include provisions that track configuration changes; the addition of users, roles, or other
objects; and security changes made on the server. Preferably, you should use automated
methods to track changes, such as data definition language (DDL) triggers, which can provide
a record of security-related alterations.
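A minimal sketch of such a trigger, assuming a hypothetical AuditDB.dbo.SecurityChangeLog table with an xml column already exists to receive the events:

-- Records every login-related DDL change made on the server.
CREATE TRIGGER trg_AuditLoginDDL
ON ALL SERVER
FOR DDL_LOGIN_EVENTS    -- CREATE LOGIN, ALTER LOGIN, DROP LOGIN
AS
    -- EVENTDATA() returns XML describing who made the change, and when.
    INSERT INTO AuditDB.dbo.SecurityChangeLog (EventData, LoggedAt)
    VALUES (EVENTDATA(), GETDATE());
GO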
System errors and alerts should be noted as well, because they often indicate when an attack
has taken place or the database server is inappropriately configured. SQL Server Agent can
notify administrators and should be configured to do so when errors are trapped or alerts are
fired. A good policy ensures that important items in your environment are monitored.
Your design also needs to provide some method of tracking unauthorized attempts to access
your SQL Servers. These access attempts can help gauge how much effort is being put into
testing your security. A network-level intrusion detection system (IDS), automated Profiler
traces, or internal SQL Server alerts can be helpful in providing an audit trail.

TAKE NOTE

If you have automated processes or applications that connect without a live user, expired
passwords can cause failed attempts that look like an attack on your server. Auditing will
help you track down these applications and determine whether there is a configuration
issue or a real attack.


As with all other security design considerations, the costs and benefits should be factored into
your final design and presented to business leaders and executives.

Making Security Recommendations

THE BOTTOM LINE

Once you've performed your cost analysis, you need to make recommendations for a security policy at some level. It may be for the entire enterprise or for a single instance of SQL
Server. In either case, your decisions need to be based on your sound judgment that they
meet the necessary requirements (whether regulatory or mandated by the business). Your
recommendations should also balance the desire to meet other requirements with the cost
to the business of implementing them. Choosing to require password changes every day
costs too much for most businesses and wouldn't be a recommendation for a retail chain.
But for a very high security situation, it might be acceptable.

Although you're gathering information from many people throughout this process and may
be sharing your decisions or thoughts about security with them, you need to create a formal
document that lists the specific security recommendations you've decided on. This should be
a complete and final recommendation of how your SQL Server security infrastructure will be
implemented, and it should include the touchpoints with other groups (such as the network
group) along with your requirements for them.
Your document should describe the rationale for your decisions as well as the impact of those
decisions on the various parts of the business. As much of the cost analysis as possible should
be included in this document. The data you supply and the costs, benefits, and risks should
be used by business leaders to determine how to proceed.

TAKE NOTE

Make every effort to present a complete and final document. Business executives and
affected leaders may overrule some sections and want them changed, and you should
incorporate their feedback into your document and resubmit the plan. However, when you
make your submission, it should be complete, not a draft.

Performing Ongoing Reviews


THE BOTTOM LINE

Security is a constant process, unlike many other computer functions that can be configured and left alone. Your design should be encompassed in a living document that changes
and flexes according to the changing environments of SQL Server and your enterprise.
In securing your SQL Servers, you may make choices that other departments don't find
acceptable, given the impact on their business area. Have alternatives for them along with
the risks and costs associated with changing your plan. Security is always a balance between
protecting the integrity of and access to your data and enabling the business to function
smoothly.
As a part of your design, you should include a time frame in which the design itself is
reviewed. Doing so ensures that the design evolves over time to meet the changing environment it governs. Typically this review happens every year, but the time frame can be longer or
shorter depending on your particular requirements.


SKILL SUMMARY
Developing a comprehensive security plan for SQL Server isn't a quick or easy process. It
depends on the requirements of your particular business, both internal and external. This
Lesson has guided you in the areas that are important and given you ideas for the items you
should consider in a security policy. However, because the requirements of each organization
are unique, it's difficult to specifically determine which policies you should implement.
One guiding principle in all your security design is that each role should have the least
amount of privileges required to complete an appropriate task. It's always possible to assign
multiple roles to a specific individual, but by separating these roles into different groups,
you can more easily ensure that your policies are followed and that rights are granted and
revoked as users move in and out of roles. Windows authentication is
preferred for this reason.
Security is an ongoing process. It requires review as well as auditing of both actions and compliance. Your design should specify methods that let you audit the security of your SQL Servers
to be sure they are properly configured and that inappropriate access isnt taking place. Your
design should also specify a time frame in which the plan is reviewed so that it continues to
meet your requirements.
For the certification examination:
Know how to gather security requirements. Understand the different types of requirements and their order of importance.

Understand the various ways network policies impact SQL Server. There are a number of
places that network policies impact the security plan for SQL Server.

Know the implications and benefits of choosing an authentication mechanism. There are
two choices for SQL Server authentication, and you should understand the differences
between them.

Know how to analyze the risk of attack to your SQL Server and mitigate any issues.
Understand the types of attacks, both computer and social engineering based, and
strategies for mitigating them.

Understand how to examine the true costs of your decisions. Each decision has an associated cost and a benefit as well as a risk factor. Know how to include these in your security
design.

Knowledge Assessment
Case Study
Delaney's Simulations
Delaney's Simulations is a company that provides event scenarios to police and military
organizations for training purposes. The events are mocked up at their facility. The
actions of trainees are recorded and evaluated, and the data is stored in a SQL Server
2005 database.

Planned Changes
The company is growing and wants to make the results of the simulations available to
clients on the Internet, but there is some concern over security. The privacy of the trainees as well as the clients must be maintained, and the results must not be disclosed to


any unauthorized individuals. A new security policy must be designed. Dean, the senior DBA
at Delaney's Simulations, is tasked with developing policies for the database servers.

Existing Data Environment


The company currently uses two SQL Server 2000 servers. SQLTest is used to test the
simulations in conjunction with the SimTest server that hosts the ASP.NET application.
Matching SQLLive and SimLive servers are used in the actual simulations.
Two new SQL Server 2005 servers will be set up as SQLWebTest and SQLWebLive for
the web-based environment that will provide data to the company's clients. These servers
will receive data from the simulation servers using SSIS.

Existing Infrastructure
Currently, Delaney's Simulations uses SQL Server-authenticated logins for the developers as well as the simulation applications. These applications have been in use for some
time, and a former developer's account is used for all the connections.
An Active Directory domain is used for employees and servers as a central point of
authentication. There is only one OU currently, but a second one is planned for the
servers that will be exposed to the Internet.
Additional firewalls are planned for the Internet connection and as a way to segregate
the servers used for the Internet from the other internal servers and clients.
All IT personnel have complete access to the development servers. Only a few people
have access to the production servers.

Business Requirements
Although the company expects that the results of all simulations will be available to
clients on the Internet, there is an understanding that privacy and security requirements
may prevent that. Developing a strong security policy is critical to the company's continued success.
Additional time has been allocated to the development team to make changes that will
provide better security. One possible consideration is the deployment of a .NET client
application to all the company's clients and forcing all data access through this client
application. If the security requirements dictate this approach, the project will proceed.
Because the company has a number of government clients, some regulations relating to
industries in the Code of Federal Regulations (CFR) apply to Delaney's Simulations.
These regulations require the auditing of all access to any person's result data as well as
controls placed on who inside Delaney's Simulations can access this data.

Technical Requirements
The new servers will be placed in a separate OU, and specific accounts will be required
for the SSIS transfer of data.
Firewalls will be installed, and only access through specific routes to specific machines
will be allowed. The only exception is the Internet web server.
It has been decided that as many SQL Server-authenticated accounts as possible need
to be replaced with Windows Integrated logins, because corporate policy dictates that
all access be granted by the AD domain.
Access through the Internet should entail as few permissions as possible and should
provide the best security that the IT group thinks it can provide.


Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
Use the information in the previous case study to answer the following questions.
1. The internal development team members insist that they need to have a copy of the
production server's data in their test environment. However, the CFR regulations seem
to prohibit this, because the developers are not a group that needs access to individuals'
data. How should the security policy address this request?
a. Set a corporate policy to override the CFR regulation and allow developer access.
b. Prohibit developer access as required by the CFR regulation, and force the developers
to build their own test data.
c. Allow the developers to receive copies of the data from the production server, but
require obfuscation of the individual information to satisfy the CFR regulation.
d. Allow each client to determine whether they see the need to comply with the CFR
regulation.
2. To comply with the companys security requirements, what should be done about the
developers access to their test server?
a. Their Windows logins should be added to the test servers with the same rights as
their old SQL logins, and their SQL logins should be deleted.
b. Their Windows logins should be added to the test servers with the same rights as
their SQL logins.
c. A central application role should be created for all developers.
d. One Windows login should be created for all developers to share.
3. It is decided that the risk of data compromise should be reduced by limiting the rights of
the Internet application to access data. Which of the following would be the best policy
to choose?
a. Set up a Windows account for IIS to use, and grant this account login rights to the
SQL Server and the tables that it needs.
b. Set up a Windows account for IIS to use that can only log on to the SQL Server. Use
an application role that the ASP.NET application can invoke to get rights to individual tables.
c. Hard-code the information for a SQL Server login into the application to use.
4. Company management prefers that only the DBA or the DBA's backup be allowed to
deploy changes to the production environment. What two policy changes should be used
to enforce this?
a. Limit access to the production servers to only those individuals' machines using the
firewall.
b. Distribute a memo to all employees outlining who can access the production servers.
c. Disable all network access, and distribute keys to the data center to the individuals
who will access the production servers.
d. Only grant access to the production servers to the appropriate individuals'
Active Directory accounts.
5. Because the CFR requirements can be amended every year, what policy should be set in
place?
a. All past implementations are expected to be grandfathered, so nothing should be done.
b. One of the IT employees should be designated to review the CFR requirements each
year and determine whether application changes are needed.
c. The application should be rewritten every year to ensure it complies with the current
regulations.
6. To ensure that the auditing requirements are met, what type of policy should be set up?
a. Develop corporate guidelines that outline how auditing should be built into any
application that accesses the data.
b. Because only one application currently accesses the data, build auditing into the
ASP.NET application.


c. Require that the database log all accesses, and force queries to use stored procedures.
d. Disable all client accounts by default, and enable them only after clients phone and
confirm they require access that day.
7. To ensure that only the appropriate individuals receive access, what policies should be in
place for the support personnel? (Choose as many as needed.)
a. Passwords may not be given out over the phone. Password resets must be sent to a
registered e-mail address.
b. Firewall changes to allow access from new IPs should be performed only after the
requesters identity and authority are verified.
c. No account information is to be sent in e-mail or given over the phone in response
to a request. An e-mail must be composed from scratch and sent to registered clients'
addresses only.
d. Before troubleshooting problems with a client, verify their identity by sending them
an e-mail using the e-mail address on file with Delaney's Simulations.
8. The disaster-recovery plan includes switching the production and development servers if
necessary. How should security policy be handled for these servers?
a. The accounts are preset on the development machine, but they are disabled. Auditing
is set up to ensure they are not enabled without authorization.
b. No accounts are set up. They can be re-created in the event of an emergency.
c. The accounts are set up and ready in case a disaster occurs. Employees are instructed
not to use them.
d. There does not need to be any policy regarding this matter.
9. What type of account should be used to transfer data between the simulation SQL
Server and the web SQL Server for SSIS?
a. The developers account that builds the packages
b. The SQL Server database server service account
c. A dedicated domain account with rights to only those machines and the appropriate
tables
d. The local system account
10. After SQL Server has been installed on all servers, what should be done before allowing
users to access them?
a. Disable the SQL Server Agent.
b. Set up Profiler to run in C2 mode.
c. Turn off SQL Server login access.
d. Run the Surface Area Configuration Manager tool.

Designing Windows
Server-Level Security

LESSON 5

LESSON SKILL MATRIX
TECHNOLOGY SKILL | 70-443 EXAM OBJECTIVE

Develop Microsoft Windows server-level security policies. | Foundational
Develop a password policy. | Foundational
Develop an encryption policy. | Foundational
Specify server accounts and server account rights. | Foundational
Specify the interaction of the database server with antivirus software. | Foundational
Specify the set of running services and disable unused services. | Foundational
Specify the interaction of the database server with server-level firewalls. | Foundational
Specify a physically secure environment for the database server. | Foundational

KEY TERMS

asymmetric key: In cryptology, one key, mathematically related to a second key, is used to encrypt data while the other decrypts the data.

certificate: A digital document (electronic file) provided by a trusted authority to give assurance of a person's identity; certificates verify a given public key belongs to a stipulated individual or organization.

cryptology: The study or practice of both cryptography (enciphering and deciphering) and cryptanalysis (breaking or cracking a code system or individual messages).

encryption key: A seed value used in an algorithm to keep sensitive information confidential by changing data into an unreadable form.

services: Processes that run in the background of the operating system; analogous to daemons in Unix.

symmetric key: In cryptology, a single key is used to both encrypt and decrypt data.

SQL Server is a software application that runs on top of a Windows operating system
server. This means the security of a SQL Server installation depends to some extent on
the security processes that exist for the Windows server. If the Windows server is compromised, then there is a good chance that some or all of the SQL Server security can be
circumvented.

The previous Lesson looked at an overall security policy for your SQL Server infrastructure.
This Lesson moves to the more granular level of the individual Windows server. You'll learn
how many of the serverwide security parameters of SQL Server are determined and set
globally for each instance.


Understanding Password Rules


THE BOTTOM LINE

TAKE NOTE

Password policies for SQL Server logins can be enforced only when the instance is installed on Windows Server 2003 or 2008.

With SQL Server authentication, the database server can apply password policies to SQL
Server logins. This greatly reduces the effectiveness of a brute-force attack on a particular
login, because the password doesn't have to remain the same indefinitely.
With SQL Server, you have two choices for authentication, as discussed in Lesson 4. With
Windows authentication, the password policies set on the Active Directory (AD) domain
govern the individual login; SQL Server is completely removed from managing this part
of security.
The options available for SQL logins, shown in Figure 5-1, can be set when a login is created
or changed later if a login is edited. In this dialog box a SQL Server login is selected, which
requires that a name and password be specified.

Figure 5-1
Password policy options

SQL Server Management Studio lists three options for this login, as described in Table 5-1.
Although one is called Enforce Password Policy, all three are part of the password policy or
rules for server security.
Table 5-1
Password policy options

CERTIFICATION READY?
These checks have dependencies: If Enforce Password Policy is not set, SQL Server does not enforce the other two; if Enforce Password Expiration is not set, SQL Server does not enforce User Must Change Password at Next Login. Or, to put it another way, you cannot select Change Password unless you also select the other two.

OPTION: Enforce Password Policy
DESCRIPTION: Causes the password to be checked against stipulated policies.
DEFAULT: Checked by default.

OPTION: Enforce Password Expiration
DESCRIPTION: Causes the RDBMS to respect the password expiration policy set on the host server operating system.
DEFAULT: Checked by default.

OPTION: User Must Change Password at Next Login
DESCRIPTION: Requires a new password to be set the next time the user logs in, before any batches are processed.
DEFAULT: Checked by default when a new password is entered.

The first two options correspond to the same settings on Windows Server 2003 (and newer)
for user accounts. By default, a Windows server in a domain respects the domain policies set
in AD; but in either case, SQL Server follows the policy of the host Windows server.


TAKE NOTE

The domain host server has only one policy. This means that although you set these
options individually for each instance, the amount of time before password expiration is
the same for all instances on a server.

Enforcing the Password Policy


The Enforce Password Policy check box requires that any password meet the following
requirements by default (available on the Windows operating system on Windows Server
2003 and newer):

The password must be at least eight characters long.
The password can't contain all or part of the username. Specifically, no three or more consecutive alphanumeric characters delimited by white space, comma, period, or hyphen can match the username.
The password must contain characters from three of the four following areas:
  Uppercase letters (A through Z)
  Lowercase letters (a through z)
  Base 10 numbers (0 through 9)
  Nonalphanumeric characters such as the exclamation point (!), at symbol (@), pound sign (#), dollar sign ($), and so on
If a password doesn't meet these requirements, then it isn't accepted in the new login dialog
box or in the Properties dialog box for an existing login. If this option is set for the login,
then these requirements are also enforced when a login changes its password using SQLCMD
or another application.
These requirements are checked by comparing the entered password with the selected domain
or local policies.

Enforcing Password Expiration


The password expiration follows the same setting for the host Windows server logins, as
shown in Figure 5-2 on the Windows 2003 platform.
Figure 5-2
Windows 2003 password
expiration setting

When a password is set or changed, the date is noted in the master.sys.server_principals system view. This view is checked on each login to determine whether the password has expired.
If the password has expired, then users are prompted to enter a new password before they can
continue with their session.
The three password check boxes are related. You can select Enforce Password Policy
only; Enforce Password Policy and Enforce Password Expiration; or Enforce Password
Policy, Enforce Password Expiration, and User Must Change Password at Next Login.
Be aware though that these settings involve password settings on the Windows operating
system.


Enforcing a Password Change at the Next Login


This option is selected by default whenever a new password is entered. It forces a password change by the login as soon as the next session is established but before any batches
can be processed. This prevents the administrator who set the password from knowing
the user's password indefinitely. It's a good security policy that prevents unauthorized
access by the administrator. It also ensures that the user chooses their own password,
which increases the likelihood they will remember it.

Following Password Best Practices


The best practice for any password mechanism includes all three of these elements, which
is why they're selected by default. Your enterprise may have its own requirements, but in
general all three options should be selected for most logins.

TAKE NOTE

An exception to this policy may exist for automated services that require their own
accounts. You should still enforce the policy; but because of the issues that arise if these services fail, and
their inability to change their own passwords, the expiration and change requirements may
not be selected. This doesn't mean you should never change those passwords, but they
must be manually changed.

LAB EXERCISE

Perform the exercise in your lab manual.

In Exercise 5.1, you'll walk through how to add a login.
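The same three options can also be applied in T-SQL when a SQL Server login is created; a minimal sketch, with a placeholder login name and password:

CREATE LOGIN WebAppUser
    WITH PASSWORD = 'T3mp!Passw0rd' MUST_CHANGE,
         CHECK_POLICY = ON,        -- Enforce Password Policy
         CHECK_EXPIRATION = ON;    -- Enforce Password Expiration (required for MUST_CHANGE)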

Setting Up the Encryption Policy

THE BOTTOM LINE

In SQL Server 2005, the encryption capabilities have been greatly expanded, and cryptographic functions and features have been introduced throughout the platform. These features
allow the encryption of data using a variety of techniques and algorithms, which enables the
administrator to meet most security needs.
SQL Server 2000 had only one option for native encryption in your database: the one-way
ENCRYPT() function, which generated a one-way hash of a string. This enabled you to
encrypt a value for comparison with another hashed value later, in the same way Windows
encrypts your password.
This encryption's flexibility was limited, however, and data stored in this form couldn't be
decrypted back to the original text. With today's increased functionality, deciding
how to deploy encryption and use it in your enterprise requires careful examination of the
level of security you need versus the effort required to meet those needs.
You need to examine a high-level view of how encryption works before delving into the
details and implications of using encryption. At a high level, data is encrypted with a key and
stored in an unreadable form; when a user needs to read this data, they must
supply some sort of password or certificate that the server uses to decrypt the data back into
readable values. This process is fairly straightforward, but its administration is complex and
should be considered with caution before implementation.


WARNING If you change the service account using SQL Server Configuration Manager, then the service master key is decrypted with the old password and re-encrypted with the new password automatically. If you change the service account manually, then you must decrypt and re-encrypt the service master key manually as well, or your encryption hierarchy will be broken and data may be lost.

Understanding the Encryption Hierarchy


Before you can make decisions about how and where to deploy encryption, you need to
examine how the encryption technologies work in SQL Server. Begin by looking at the
hierarchy of encryption and how its structured in the product.
When SQL Server is first installed, the service account password is used to encrypt a service
master key. This is done using a 128-bit Triple DES algorithm and the Windows Data
Protection API (DPAPI). The service master key is the root of the encryption hierarchy in
SQL Server, and its used to encrypt the master key for each database.
Although the service master key is automatically created, the individual database master
keys require manual preparation. The administrator creates this key for each database when
encryption is needed. The database master key, in turn, is used to encrypt asymmetric keys
and certificates that are used for encrypting specific data.

LAB EXERCISE

Perform the exercise in your lab manual.

In Exercise 5.2, you'll set up a database master key before you begin looking at the keys
available in SQL Server.
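The core of that exercise is a single statement; a sketch, assuming a hypothetical database named SalesDB and a placeholder password:

USE SalesDB;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'C0mpl3x!MasterKeyPwd';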

Using Symmetric and Asymmetric Keys


Two types of keys are used in cryptology operations. A full discussion of the details of
these key types is beyond the scope of this textbook, but you'll review an introduction to
enable you to make some decisions regarding your encryption policy. It's assumed that
you have a basic understanding of how cryptography works and the meanings of some
common cryptographic terms.

Symmetric keys are the simpler of the two key types and pose much less of a load on the
server during encryption and decryption operations. They're called symmetric because the
same key is used for both encryption of plaintext and the decryption of ciphertext. This poses
some security risks, because only one key is needed for the operations and there is no way to
authenticate the other side of the cryptographic transaction. However, this is still fairly strong
encryption and is often used to encrypt the data in a SQL Server column.
SQL Server lets you use the following algorithms in symmetric key encryption:

Data Encryption Standard (DES)


Triple DES (3DES)
RC2
RC4
RC4 (128 bit)
DESX
Advanced Encryption Standard (AES; 128-, 192-, or 256-bit version)

When you create a symmetric key, you specify the algorithm to be used as well as an
encryption key mechanism. The encryption method can be the database master key, which
means SQL Server can automatically decrypt and open the key for use in encryption or
decryption operations. The default method is Triple DES if a key is secured by a password
instead of the database master key.
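A minimal sketch of creating a symmetric key follows; the key name and password are illustrative assumptions.

-- Create an AES-256 symmetric key; here it is secured by a password
-- rather than by the database master key.
CREATE SYMMETRIC KEY SalesDataKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = 'An0ther $tr0ng passphrase';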

TAKE NOTE

SQL Server encryption relies on the Windows operating system to implement individual
types of encryption. In some cases, older versions of Windows may not support
newer encryption algorithms that are supported by SQL Server.


WARNING If you don't use the database master key to secure the symmetric key, then the key is potentially secured by a weaker algorithm (Triple DES) than the data (if you choose RC4_128 or AES).

Asymmetric keys differ from symmetric keys because a different key is used for encryption
operations than for decryption operations. Asymmetric keys come in pairs, and each key in
the pair is designated as either public or private. These keys are more complex and require a
larger amount of resources to perform either encryption operation, placing more of a load on
your SQL Server.
The public key is distributed to users or applications, and the private key is used by the server
to encrypt data. Users submit the public key with their queries, and the server can use that
key to decrypt the data. In addition, because the keys are matched as a pair, the server knows
that a particular users public key is matched with the private key, providing the cryptographic
transaction with a level of authenticity.
All asymmetric keys are created using the RSA algorithm with 512, 1,024, or 2,048 bits in
the keys. Certificates are a form of asymmetric key and are discussed in the next section.
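A minimal sketch of creating an asymmetric key follows (the key name is an illustrative assumption); when no password clause is supplied, the private key is protected by the database master key.

-- Create a 2,048-bit RSA key pair in the current database.
CREATE ASYMMETRIC KEY SalesAsymKey
    WITH ALGORITHM = RSA_2048;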

Using Certificates
SQL Server allows certificates to be used in encryption operations in addition to keys.
SQL Server conforms to the X.509 standard for certificates, which is widely used around
the world. Certificates can be organized in a hierarchy of trust, which ensures that a
particular certificate can be used in encryption operations and that certificates can
be traced to verify that the holder of the certificate is a particular entity. In SQL Server,
you can generate your own self-signed certificates or use a certificate signed by a Certificate
Authority (CA) that is trusted by your enterprise.

Certificates are useful when you need to certify that your data is in fact your data. By
encrypting data using the private key for your certificate, a user can use the public certificate
not only to decrypt the data, but also to trace back the certificate to authenticate the server as
belonging to your enterprise. In essence, the certificate provides a digital signature.
Certificates can also be revoked or allowed to expire. This can be useful if you need to limit
access to a certain period of time or if you want to ensure periodic reissue of the certificate
and revalidation of its identity or authorization.
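A minimal sketch of creating a self-signed certificate with an expiration date follows; the name, subject, and date are illustrative assumptions.

-- Create a self-signed certificate that expires at the end of 2010.
CREATE CERTIFICATE SalesCert
    WITH SUBJECT = 'Sales data encryption certificate',
    EXPIRY_DATE = '20101231';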

Considering Performance Issues


Encryption is a great tool for protecting your company's data from theft and unauthorized use. However, it isn't a tool that you can deploy indiscriminately across all the data in your databases. Encrypting data involves a number of performance issues that limit how widely
you can deploy this feature.
TAKE NOTE

If you use certificates, SQL Server makes this process easy by including the DecryptByKeyAutoCert() function. This function automatically decrypts data encrypted by a symmetric key that is itself encrypted using a certificate.

We have already mentioned the first consideration in the earlier discussion of symmetric and
asymmetric keys: Asymmetric keys provide more security, but their encryption and decryption
operations are slower than those of symmetric keys. For this reason, asymmetric keys
generally aren't used to encrypt data, but rather are used to encrypt symmetric keys. The
symmetric keys are then used to encrypt the data.
This hybrid method of using cryptographic techniques can be confusing, but it's widely
deployed. The speed difference can be substantial, so symmetric keys are commonly used to
encrypt all types of messages and other data; then, the symmetric key is secured by a longer,
stronger, asymmetric key. This technique reduces the performance penalty for the stronger
encryption to a minimum.
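A minimal sketch of this hybrid approach follows; the table, column, and object names are illustrative assumptions. A certificate secures the symmetric key, the symmetric key encrypts the column data, and the DecryptByKeyAutoCert() function mentioned in the Take Note opens the key automatically during decryption.

CREATE CERTIFICATE CustCert WITH SUBJECT = 'Protects the customer data key';

CREATE SYMMETRIC KEY CustDataKey
    WITH ALGORITHM = AES_128
    ENCRYPTION BY CERTIFICATE CustCert;

-- Encrypt: open the symmetric key, then write the ciphertext.
OPEN SYMMETRIC KEY CustDataKey DECRYPTION BY CERTIFICATE CustCert;
UPDATE dbo.Customers
   SET SSNEncrypted = EncryptByKey(Key_GUID('CustDataKey'), SSNPlain);
CLOSE SYMMETRIC KEY CustDataKey;

-- Decrypt: the certificate opens the symmetric key automatically.
SELECT CONVERT(varchar(11),
       DecryptByKeyAutoCert(cert_ID('CustCert'), NULL, SSNEncrypted))
  FROM dbo.Customers;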


WARNING Don't forget the effect of encryption on write operations. Even with symmetric keys, the overhead of adding encryption can substantially reduce a busy Online Transaction Processing (OLTP) system's ability to keep up with inserts and updates.

Another consideration for encrypted data is that once the data in a column is encrypted, it
isn't available to the query processor for use in indexes, joins, sorting operations, grouping
operations, and filtering. This can seriously impact your database's ability to perform efficient
queries. A database that was completely encrypted would be equivalent to a database with no
indexes; it would also incur a substantial performance penalty in reading data due to the
decryption operations.
If the overhead of the encryption and decryption operations isn't enough of a penalty,
encrypted data also affects performance in another way: Encrypted data grows in size (often
substantially), which causes fewer rows to be returned with each page in addition to requiring
more disk space for storage and backups. The growth in size is given by the following formula:
Size = (FLOOR((8 + D) / BLOCK) + 1) * (BLOCK + BLOCK + 16)

LAB EXERCISE

Perform the exercise in your lab manual.

D is the original data size in bytes, and BLOCK is the block size of the cipher in bytes. RC2,
DES, Triple DES, and DESX use 8-byte blocks, and the rest use 16-byte blocks. In Exercise 5.3,
you'll see how dramatically a piece of data can grow with encryption applied.
A 20-character data element would require 35 bytes (57% more space) if encrypted. For
larger columns, that difference is less, but for small columns, this may require substantial
schema changes.
Because the addition of encryption can affect your server in a number of ways, you should
consider each of these implications when deciding how widely you'll deploy this feature in
your applications. In addition to the performance implications, don't forget that the additional
overhead for data size can also affect your schema, forcing changes not only in tables,
but also in related code in stored procedures, functions, assemblies, and other parts of your
application.
The next section will look at a few ways you need to use this information in deciding what
amount of encryption to use and how to deploy it.

Developing an Encryption Policy


Now that you know the details of encryption, you must determine what type of policy
makes sense for your enterprise. The security of financial or medical information may
necessitate encryption because of regulatory requirements. The entries in a Web site
guestbook for your local photography club may not need any type of encryption, or
much security of any kind.

REF

Overall security policy is discussed in Lesson 4.

The policies you develop for encryption will partly be driven by your enterprise's overall security policy. The need for encryption to be deployed will be dictated by that policy, whereas
many of the technical details must be determined separately.
When you determine that data from a particular table requires encryption, be sure you limit
the encryption to only those columns that really need it. The time required to encrypt and
decrypt each column of data affects the performance of your server by slowing it down, and
encryption also requires additional disk space. Avoid including primary keys and foreign keys
(columns used for sorting or grouping operations) as encrypted columns, because you pay
a severe performance penalty for any queries that need to perform these operations on
encrypted columns.

TAKE NOTE

If you need to encrypt a column that is functioning as a primary or foreign key, you
should derive a surrogate key and use that instead as the primary or foreign key.


Managing Keys
You shouldn't choose to implement encryption without thinking about how doing so will
affect your enterprise. Often, DBAs don't have access to the keys, which limits their ability
to help with queries and check data integrity. Key management is often a complex
and difficult undertaking, because the security of these keys is critical to ensuring that
unauthorized individuals can't access your data. Deciding which keys are available to which
individuals and how to store those keys is critical to a well-designed encryption scheme.

If your key management scheme is compromised, then you may need to change your keys.
If you're changing the asymmetric keys used to encrypt symmetric keys, this is a simple
process, especially with certificates, which can be revoked. However, if you need to change
the symmetric keys that encrypt your data, you must decrypt the data and then re-encrypt
it with a new key. This endeavor can be resource intensive and time consuming if you have
a large amount of data. This is another reason to limit the amount of encryption that you
choose to deploy in your database.
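A minimal sketch of rotating a symmetric key follows; all object names are illustrative assumptions, and both keys are assumed to be protected by the same certificate.

OPEN SYMMETRIC KEY OldDataKey DECRYPTION BY CERTIFICATE CustCert;
OPEN SYMMETRIC KEY NewDataKey DECRYPTION BY CERTIFICATE CustCert;

-- DecryptByKey locates the correct open key from the ciphertext itself, so the
-- same statement decrypts with the old key and re-encrypts with the new one.
UPDATE dbo.Customers
   SET SSNEncrypted = EncryptByKey(Key_GUID('NewDataKey'),
                                   DecryptByKey(SSNEncrypted));

CLOSE SYMMETRIC KEY NewDataKey;
CLOSE SYMMETRIC KEY OldDataKey;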

Choosing Keys
It's recommended that you use a hybrid key scheme, with asymmetric encryption used
to secure the symmetric keys that encrypt the data. The symmetric keys should use the
strongest algorithm you can afford to deploy. This is determined by testing and weighing the performance implications of each algorithm along with any specific requirements
you have from regulatory bodies. For example, your company may be required by law to
implement DES encryption even though RC4_128 might be more secure.
The asymmetric encryption that you choose should follow the same guidelines. Longer keys
containing more bits are preferable to shorter ones, but there can be a severe performance
penalty. Test the keys under load to ensure that your server can handle the additional processing requirements. Certificates can be used instead of plain asymmetric keys, but be sure you
have a mechanism for ensuring their update, revocation, and replacement as necessary. The
details of developing such a policy are beyond the scope of this textbook, but more details are
available in the Windows Server Resource kits and from certificate vendors.
The overall encryption of the server also requires that the service master key and the database
master keys be protected. SQL Server handles the service master key internally as long as
you use the Configuration Manager to change service accounts. The database master keys are
needed whenever you restore a database to a new server. This often occurs in disaster-recovery
scenarios and development areas, so your policy should address how to protect the keys in
these situations as well as make them available when needed.
Finally, SQL Server lets you implement user-level encryption using a passphrase instead of a
key. This mechanism can be tempting, but be careful: forgetting the passphrase means that
data is lost. Avoid this encryption mechanism in your policy if possible.
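A minimal sketch of passphrase-based encryption follows; the values are illustrative, and anyone who forgets the passphrase has no way to recover the data.

DECLARE @cipher varbinary(8000);
-- Encrypt a value with a passphrase; no key or certificate is involved.
SET @cipher = EncryptByPassPhrase('a phrase you must not forget', 'Sensitive value');
-- Decrypt it again; the wrong passphrase simply returns NULL.
SELECT CONVERT(varchar(100),
       DecryptByPassPhrase('a phrase you must not forget', @cipher));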

Extensible Key Management


SQL Server 2008 includes a new feature known as Extensible Key Management (EKM).
This is a method of providing encryption using third-party software and, usually, hardware
such as smart cards or USB devices. With EKM, encryption can be established using physical
hardware known as a Hardware Security Module (HSM). This can be a more secure solution
because the encryption keys do not reside with the encrypted data in the database. Instead,
the keys are stored on the hardware device.
Extensible Key Management with an external hardware device can provide the following
benefits, depending on your security design:

Additional authorization checks, providing further separation of duties
Faster performance using hardware-based encryption/decryption
External encryption key generation
External encryption key storage (physical separation of data and keys)
Encryption key retrieval
External encryption key retention
Easier encryption key recovery
Manageable and securable encryption key distribution
Secure encryption key disposal

EKM is implemented by registering the service in SQL Server 2008. This involves creating a
cryptographic provider object at the server instance level and specifying a .dll file containing
the software that implements the encryption. The .dll file should be supplied by the EKM
product vendor. The cryptographic provider object can then be used to create both
symmetric and asymmetric keys. The keys are then used like other keys to encrypt or decrypt
objects in SQL Server. EKM can also be used with Transparent Data Encryption (TDE) to
encrypt entire databases, although only asymmetric keys can be used for TDE.

REF

Transparent Data Encryption is explained further in Lesson 6.

TAKE NOTE

Extensible Key Management is only available in the Enterprise, Developer, and Evaluation
editions of SQL Server 2008.
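A minimal sketch of registering an EKM provider and creating a key on the device follows; the provider name, file path, and key name are illustrative assumptions, and the actual .dll comes from the EKM vendor.

-- Enable EKM ('EKM provider enabled' is an advanced option).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'EKM provider enabled', 1;
RECONFIGURE;

-- Register the vendor-supplied provider .dll at the instance level.
CREATE CRYPTOGRAPHIC PROVIDER HsmProvider
    FROM FILE = 'C:\Program Files\HsmVendor\EkmProvider.dll';

-- Create a symmetric key whose key material lives on the hardware device.
CREATE SYMMETRIC KEY HsmDataKey
    FROM PROVIDER HsmProvider
    WITH PROVIDER_KEY_NAME = 'HsmDataKey',
         CREATION_DISPOSITION = CREATE_NEW;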

Introducing SQL Server Service Accounts

THE BOTTOM LINE

SQL Server services process database actions. Each service must be started by an authorized
user. Each service must have an owner with appropriate rights and permissions to perform
the tasks assigned. Use SQL Server Configuration Manager to make any changes to SQL
Server services.
SQL Server and its associated parts are software programs that are similar to any other programs
on your computer. A user must log in and start each of these programs in order for them to run.
For a server program such as SQL Server, you don't want to have to log on and start it manually
each time the server is started, especially if it's restarted in the middle of the night!
In order for SQL Server to start automatically, a user account must log on to the host Windows
server and then start running the application. In Windows, this overall scheme represents a service,
and the service account is the account that logs on to the Windows server and starts the program.

Understanding the SQL Server Services


SQL Server offers a number of services that are part of the database server, each of
which can have its own service account. In all, 10 services can be installed with SQL
Server, as listed in Table 5-2. They aren't all installed by default; you must select them
for installation.
TAKE NOTE

The instance name includes the name of the local computer.

Each of these services can have a service account that logs on to start the service and whose
context is used when performing actions for the service. For the service account to have
enough rights for the service to run under its context, each is added to a group created by
the installation program. The groups for each service are shown in Table 5-2 as well. The
InstanceName part of the group name is replaced by the name of that particular instance.
When the SQL Configuration Manager changes a service account, the new account is placed
in the appropriate group for the rights needed to run that service. The former user is also
removed from the group. This ensures that the appropriate rights are granted for each account
as required in accordance with the principle of granting the least rights needed.

Table 5-2
SQL Server services

SQL Server (instance-aware)
Default, 2005: SQLServer2005MSSQLUser$ComputerName$MSSQLSERVER
Default, 2008: SQLServerMSSQLUser$ComputerName$MSSQLSERVER
Named, 2005: SQLServer2005MSSQLUser$ComputerName$InstanceName
Named, 2008: SQLServerMSSQLUser$ComputerName$InstanceName

SQL Server Agent (instance-aware)
Default, 2005: SQLServer2005SQLAgentUser$ComputerName$MSSQLSERVER
Default, 2008: SQLServerSQLAgentUser$ComputerName$MSSQLSERVER
Named, 2005: SQLServer2005SQLAgentUser$ComputerName$InstanceName
Named, 2008: SQLServerSQLAgentUser$ComputerName$InstanceName

Analysis Services (instance-aware)
Default, 2005: SQLServer2005MSOLAPUser$ComputerName$MSSQLSERVER
Default, 2008: SQLServerMSOLAPUser$ComputerName$MSSQLSERVER
Named, 2005: SQLServer2005MSOLAPUser$ComputerName$InstanceName
Named, 2008: SQLServerMSOLAPUser$ComputerName$InstanceName

Reporting Services (instance-aware)
Default, 2005: SQLServer2005ReportServerUser$ComputerName$MSSQLSERVER and
SQLServer2005ReportingServicesWebServiceUser$ComputerName$MSSQLSERVER
Default, 2008: SQLServerReportServerUser$ComputerName$MSRS10.MSSQLSERVER
Named, 2005: SQLServer2005ReportServerUser$ComputerName$InstanceName and
SQLServer2005ReportingServicesWebServiceUser$ComputerName$InstanceName
Named, 2008: SQLServerReportServerUser$ComputerName$MSRS10.InstanceName

Notification Services (not instance-aware)
Default or Named, 2005: SQLServer2005NotificationServicesUser$ComputerName

Integration Services (not instance-aware)
Default or Named, 2005: SQLServer2005DTSUser$ComputerName
Default or Named, 2008: SQLServerDTSUser$ComputerName

FullText Search (instance-aware)
Default, 2005: SQLServer2005MSFTEUser$ComputerName$MSSQLSERVER
Default, 2008: SQLServerFDHostUser$ComputerName$MSSQL10.MSSQLSERVER
Named, 2005: SQLServer2005MSFTEUser$ComputerName$InstanceName
Named, 2008: SQLServerFDHostUser$ComputerName$MSSQL10.InstanceName

SQL Server Browser (not instance-aware)
Default or Named, 2005: SQLServer2005SQLBrowserUser$ComputerName
Default or Named, 2008: SQLServerSQLBrowserUser$ComputerName

SQL Server Active Directory Helper (not instance-aware)
Default or Named, 2005: SQLServer2005MSSQLServerADHelperUser$ComputerName
Default or Named, 2008: SQLServerMSSQLServerADHelperUser$ComputerName

SQL Writer (not instance-aware)
User group: N/A

TAKE NOTE

If you choose to manually change the service account using the Services applet in the Control
Panel or the Manage Computer MMC snap-in, make sure the new account is placed in the
appropriate group. It isn't recommended that you assign individual rights to each user account.

Each of these services is classified as either instance-aware or instance-unaware. If a service is
instance-aware, then separate copies of its executables and supporting files are installed with
each new instance, and it's able to run independently of other instances. Services that are


LAB EXERCISE

Perform the exercise in your lab manual.

instance-unaware are installed only once on each Windows host and serve all instances on
that host. (Table 5-2 gives the classification of each service.)
In Exercise 5.4 Part A, you'll explore service account groups.

Choosing a Service Account


Each particular service must have an account under which it can run, but you have a
number of choices when choosing an account. When you install SQL Server, you're
given the option to allow all services to run under the same account. Alternatively, you
can specify each service to use separate accounts. This section examines the default and
optional accounts before looking at why you should choose a particular account.

The default choice is to specify a Domain User account under which the SQL Server service
will run. This approach is recommended to ensure that the least amount of privileges is granted
and that you control which permissions this account has. The account you choose should be an
account created specifically for this service and not an account that an actual person will use.
It's also not recommended that you share accounts for different services or servers.
Most Windows operating systems include three built-in accounts under which you can run
the SQL Server services: the Local System, Network Service, and Local Service accounts.
They differ in the following ways:
Local System. This is a highly privileged account that can access most resources on the
local computer. It isn't recommended that you use this account.
Network Service. This account has the same level of access as the Users group on the
local computer. When it accesses network resources, it does so under the context of the
local computer account.
Local Service. This is a built-in account that has the privileges of the local Users group
on the computer. This is the best choice for SQL Server services if you must use a
built-in account. Network resources are accessed with no credentials and a null session.
All other services have their own default and optional accounts, as shown in Table 5-3.
Table 5-3
Service account defaults for SQL Server services

SQL Server: Default: Domain User. Optional: Local System, Network Service
SQL Server Agent: Default: Domain User. Optional: Local System, Network Service
Analysis Server: Default: Domain User. Optional: Local System, Network Service, Local Service
Report Server: Default: Domain User. Optional: Local System, Network Service, Local Service
Notification Services: Default: N/A. Optional: N/A
Integration Services: Default: Network Service (Windows Server 2003 and 2008) or Local System (Windows 2000 Server). Optional: Domain User, Local System, Local Service
FullText Search: Default: Same as SQL Server. Optional: Domain User, Local System, Network Service, Local Service
SQL Server Browser: Default: Domain User. Optional: Domain User, Local System, Network Service, Local Service
SQL Server Active Directory Helper: Default: Network Service. Optional: Local System
SQL Writer: Default: Local System. Optional: N/A


TAKE NOTE

SQL Server doesn't configure Notification Services. An administrator must do this using an XML file.

When deciding on a service account, you must consider how the service will be used and
under what security parameters your server will run. The recommendations are general guidelines and should be followed unless your environment dictates particular reasons to deviate
from them.
The following sections address each type of account, the situations in which
you should use it, and the reasons for using each.

Choosing a Domain User

WARNING The Express editions of SQL Server use different accounts on some platforms. Consult Books Online for the variances for the SQL Server 2005 Express edition.

The general guideline for most services is to use a Domain User account. If your domain
administrators follow the guidelines for granting privileges to domain accounts, the
EVERYONE and other global groups won't have any privileges. The Domain User account
that is used will be granted the appropriate privileges by the SQL Server setup program on
the local machine, both to run as a service and to access files on the local machine.

Additional privileges may be required in the following types of situations and can be granted
to a Domain User account as needed:

The need to access a drive (local or network) to read and/or write files.
The need for heterogeneous queries accessing another data source.
The need to work with replication.
The need to use mail services. For Microsoft Exchange, this is necessary, but other mail
systems (like those used by Database Mail or SQL Agent) may require a Domain User
account as well.

If you need to grant additional rights to this account, it's recommended that you create new
groups, grant the rights to those groups, and include this Domain User in them.
The groups can be local groups for this Windows host or domain-level groups that provide
access to remote machines.

TAKE NOTE

Even though this section specifies Domain User, it could be a local user account on the
local Windows host for a standalone server.

Choosing a Local Service

TAKE NOTE

A full list of permissions granted is in Books Online under Setting Up Windows Service Accounts.

The Local Service built-in account is used for running services on the local machine
with a limited set of privileges. This account has the same rights and privileges as any
authenticated user, which it receives as part of the Users group. As with any well-secured
machine, you should grant few, if any, additional rights to this group.

This account is a good choice for services that don't access any resources outside of their own
service. Because SQL Server setup adds this account to the appropriate group (as shown
earlier in Table 5-2), it receives access to certain folders in order to run properly, but it has no
rights outside of those minimal permissions.
If you have a service that requires access to additional folders beyond those permissions
granted by SQL Server, it's recommended that you choose a Domain User account.


CERTIFICATION READY?
Ensure that you understand the effects of using an Active Directory user versus a local user account.

WARNING Microsoft highly recommends that you don't use this account for SQL Server or SQL Server Agent.

Choosing a Network Service


The Network Service built-in account has the same rights as members of the Users group
on the local Windows machine, but it also has the ability to access network resources
using the computers domain account. This account provides limited credentials, but it
allows some level of network access.

If you're performing anonymous file transfers, this account works for Integration Services; but
if you must copy files to or from secured folders on your domain, a Domain User account is
recommended. This account works well for the Active Directory Helper, and it's recommended
that you leave this account set for that service.

Choosing a Local System


The Local System account is the most powerful account on the local machine. It's
equivalent to an Administrator, but it has additional system-level access that even the
Administrator account doesn't have. Because this is considered a privileged account, it
isn't recommended that SQL Server or any of its services run under this account.

Case Study: Planning for Services


Prior to SQL Server 2005, most computers had just two services running: SQL Server
and SQL Server Agent. For many servers that didn't require access to network resources
(such as mail), most users ran their servers under either the Local System or the
Administrator account. This usually occurred because nobody planned the SQL Server
installation prior to installing the software, and, when faced with the need to choose a
service account, they made one of the two previous choices.
Both are poor choices because they grant rights to the server that aren't needed, and any
security breach could result in more problems than just a loss of data.
With SQL Server 2005 and its 10 potential services, it's strongly recommended that you
plan your installation and create user accounts for each of your services prior to installing
the SQL Server 2005 software.

The general guideline is to create a user account for each instance-aware service and for
Integration Services. In the next section, you'll examine how to change to the user account
you've created.

Changing Service Accounts


As mentioned, if you need to change the service account, you should use the SQL Server
Configuration Manager to ensure that the proper rights and settings are granted to the
new account. This program is installed with SQL Server and is located in the SQL Server
2005 group under Programs, in the Configuration Tools section.


When you start this tool, it has three main sections, as shown in Figure 5-3: SQL Server 2005
Services, SQL Server 2005 Network Configuration, and SQL Native Client Configuration.
This section will examine only the first section.
Figure 5-3
Using the SQL Server
Configuration Manager

As shown in Figure 5-3, the SQL Server Services section shows the services that are running
on one particular server. In this case, five services are running on this server, with one named
instance:
SQL Server Integration Services. As mentioned, there is only one copy of this service
for each Windows host. Additional instances share this one service.
SQL Server FullText Search. Each instance that has this enabled will have one service
listed here.
SQL Server. This is the main database engine, shown here as an instance named SS2K5.
SQL Server Agent. This is another instance-aware service. In this case, the service is
running for instance SS2K5.
SQL Server Browser. This is one of the instance-unaware services, meaning this is the
only copy of this service regardless of how many instances are installed.
To the right of the service name, a number of pieces of information appear, as shown in
Figure 5-4. These include the status, start mode, service account, process ID, and type. The
status and start mode will be discussed later in this Lesson in the Working with Services
section. The main item you're concerned with here is the service account. In this example,
three different service accounts are in use.
Figure 5-4
Viewing SQL Server services
in Configuration Manager

The default Network Service account is running Integration Services, and SQL Server Agent
for this instance has its own account. SQL Server, FullText Search, and SQL Server Browser
all share the same service account, which isn't recommended.
You can change the service accounts using the Services Control Manager in Control Panel
or Manage Computer, but it isn't recommended that you do this for SQL Server. The
Configuration Manager is specifically designed to ensure that the proper permissions are set
up for any service accounts. This includes the file-level permissions for accessing files and
folders as well as the necessary service-level rights. Table 5-4 shows the service rights needed
for each service.



WARNING Choose a strong password for this user account. It can always be changed and the service restarted with a new password. For services, you don't want a weak password that someone can guess. This is a user account, so it can be used to log on to your network.

Table 5-4
Service account service rights needed

SQL Server: Log on as a service; Act as part of the operating system (Windows 2000); Log on as a batch job; Replace a process-level token; Bypass traverse checking; Adjust memory quotas for a process; Permission to start SQL Server Active Directory Helper; Permission to start SQL Writer
SQL Server Agent: Log on as a service; Act as part of the operating system (Windows 2003); Log on as a batch job; Replace a process-level token; Bypass traverse checking; Adjust memory quotas for a process
Analysis Server: Log on as a service
Report Server: Log on as a service
Integration Services: Log on as a service; Permission to write to the application event log; Bypass traverse checking; Create global objects; Impersonate a client after authentication
Notification Services: N/A
FullText Search: Log on as a service
SQL Server Browser: Log on as a service
SQL Server Active Directory Helper: None
SQL Writer: None

If you manually change the service account, you can easily forget to grant a permission, which
may result in the service not running or not running properly. You may also inadvertently
grant too many rights to the service, which can result in a poorly secured environment.
Because the Configuration Manager is as easy to use as any other tool, you should use only
this tool to change service accounts.
Before you use this tool, however, be sure you've already set up the appropriate user accounts
on your local computer or on the domain. This tool doesn't allow you to create a new
account, only select an existing account.

TAKE NOTE

Be sure you don't select the User Must Change Password at Next Login check box on the
new account. The service has no way of doing this and won't start. The recommendation
is that you also select the Password Never Expires check box to ensure that this service
doesn't stop unexpectedly. This doesn't mean the password should never be changed, only
that it should be changed manually, not forced.

LAB EXERCISE

Perform the exercise in your lab manual.

In Exercise 5.4 Part B, you'll change the service account for the FullText Search service.


Setting Up Antivirus Software

THE BOTTOM LINE

Many antivirus software vendors supply two licenses per user so the home office computer
can also be protected against contaminating the enterprise server. In both cases, though,
antivirus software is reactionary: it can't be updated until after an attack has been detected
somewhere in the world. If you happen to have the first server attacked, even your diligent
efforts won't help; your systems can still be infected. Plan a control strategy should the
unwanted occur.

Many SQL Server instances run unattended and provide a network service to clients, with the
Windows operating system providing a host for SQL Server. In these cases, antivirus software
shouldn't be necessary. However, in some cases Windows provides other software services such
as file serving, e-mail, or some other process, and antivirus software is warranted.
There is no reason SQL Server and an antivirus software application can't exist together, but
you must appropriately configure the antivirus software. For most applications, the default
configuration will cause SQL Server to perform poorly.

TAKE NOTE

Some companies require antivirus software on all machines. It isn't worth arguing about
this necessity on a dedicated SQL Server. Instead, work with the network administrators to
properly configure the software.

An antivirus program works by hooking into the disk access drivers and validating every
attempt to write to a file. In this way, it prevents a malicious program from altering a file and
writing a virus into the file that will execute or propagate when the file is accessed.
SQL Server requires file accesses whenever it performs an insert, update, delete, or other operation that changes data, which means the antivirus program by default scans the data and log
files for each operation. Because data and log files are often megabytes, gigabytes, or even larger
in size, this can cause the server to halt while the antivirus software completes its scan.
For this reason, it's highly recommended that you configure your antivirus software to exclude
the following files:

Database data files. .mdf and .ndf files.
Database log files. .ldf files.
Backup files. Usually .trn, .dif, and .bak files, but whatever backup extensions you use
should be excluded.

TAKE NOTE

Some environments should exclude specific files only and not whole directories. Although
this is possible, it may cause issues with backup files, which often have a unique name for
each backup. Try working with your network group for an exception in this case.

In addition, you may want to exclude files in the following scenarios:

Quorum drives. In a clustered situation, you should exclude the quorum drives completely.
Replication. You may want to tightly secure access to any folders where temporary replication
files are written, and exclude them from scans.
SQL Server log files. Only SQL Server should write to these files. They can grow quite
large, so exclusion prevents any slowdowns on your server.
Log shipping files. You should especially exclude these files on the standby server.
It's possible that other files will cause issues with your SQL Server installation. After you've
installed both SQL Server and your antivirus software, examine the logs of the antivirus
software and be sure you aren't scanning files that conflict with the operation of your
database server.


Working with Services


THE BOTTOM LINE

Minimize the number of enabled services to minimize unneeded overhead.


Earlier in this Lesson, you learned how to manage service accounts for SQL Server to control
different parts of the instance. However, you can enable many more services on the instance separate from those examined earlier. Each of these services has a security implication and should
be disabled unless needed. As part of adherence to the Trustworthy Computing Initiative, SQL
Server is installed in a secure by default mode, and most of these services are disabled. This
section will explain how to enable these services as well as the security impact of each.
When you install SQL Server and whichever options are required for your installation, only
certain services are set to automatically start up when the computer boots. Table 5-5 shows
the components of SQL Server and the startup state of each after setup completes.

Table 5-5
Services default mode after setup

SQL Server: Started
SQL Server Agent: Stopped
Analysis Services: Started
Integration Services: Started
Report Server: Started
Notification Services: N/A
FullText Search: Stopped
SQL Server Browser: Stopped
SQL Server Active Directory Helper: Stopped
SQL Writer: Stopped

Not all of these components are installed by default, and they may not be present on your
systems. If you choose to install them, however, the mode listed in Table 5-5 is the mode they
will be in unless you selected the autostart options in the setup program.
Each service's mode should remain stopped unless you're using the service on this server. For
example, the FullText Search service may be installed, and you may plan on using it, but until
you create a full-text index and require its update, don't start this service. If you've installed
services but aren't using them, set them to Disabled until you have an application
that requires them.
Although you can use the Service Control Manager in the Control Panel for the Windows
host to change modes, it's recommended that you use the SQL Server Configuration Manager
for all service changes relating to SQL Server.

WARNING Don't install all components on servers by default. Make sure you need Integration Services, Report Server, or any other service before installing it. It's a tenet of the Trustworthy Computing Initiative that installations should be secure by default; installing all components as policy violates this.

Regardless of the mode a service is set to, the administrator can change this mode to one of
the following three modes:

Automatic. In this mode, the service account attempts to log on and start the service
when the Windows server boots.
Manual. In this mode, the service account doesnt log on and start the service when
Windows starts; but a start message can be issued to the service, and it will attempt to start.
Disabled. In this mode, the service can't be started with a start message. The administrator of the Windows host needs to change the service to Automatic or Manual.


LAB EXERCISE

Perform the exercise in your lab manual.

Most services that are being actively used, such as the database server, analysis server, and so
on, should be set to Automatic so they're available any time the server is running. Services
that you use rarely can be set to Manual or Disabled to prevent them from starting when the
server boots. If you no longer need a particular service on one of your servers, you should set
it to Manual or Disabled and stop it.
In Exercise 5.5, you'll disable a service.

Configuring Server Firewalls


THE BOTTOM LINE

Most firewalls in corporate environments are dedicated hardware devices that function as
highly configurable routers, with rules specifying which traffic is allowed to pass through
them. These must be configured according to your business needs.
However, as the number and variety of threats have proliferated, many operating systems have
started to integrate and run software firewalls alongside other services. Most of the platforms
on which SQL Server runs include a software firewall that needs to be configured to allow
SQL Server to access and be accessed from clients and other servers.
If you have a firewall enabled, then you must make sure the ports used for SQL Server are
open for communication with those clients that need it. For the default instance, this usually
means that ports 1433 and 1434 are open, but named instances choose a port on startup by
default. In order to secure these instances with your firewall, you need to use the SQL Server
Configuration Manager to assign a specific port to these instances for communication with
their clients.
Table 5-6 lists the various services and the ports that they require for communication. For
named instances of these services, you can specify specific ports to be used.

Table 5-6
TCP port numbers used by services

TAKE NOTE

Be sure you also close ports if services are no longer being used.

SQL Server default instance: 1433, 1434 (UDP)
SQL Server named instance: Chosen at startup
Integration Services: DCOM ports (consult Windows OS documentation), 135
Analysis Services default instance: 2383
Analysis Services named instance: Chosen at startup
Analysis Services Browser: 2382
Named pipes connections: 445
Report Server (through IIS): 80
Endpoints: Specified endpoint TCP port used in endpoint setup

Like the services installed on your server, the ports used by the services shouldn't be opened
on a firewall unless they're being used. Keep open the minimum number of ports required
for the server to meet your needs. For example, if Integration Services is accessed only on the
local server and not across the network, then don't open these ports.


Physically Securing Your Servers

CERTIFICATION READY?
When examining security, be sure you grasp the breadth and depth of this topic. Do you understand how authentication, physical barriers, firewalls, disaster-recovery plans, business recovery plans, risk analyses, policies, enforcement, incident response plans, and forensic investigations all interact?

Every server that you have running in your enterprise should be physically secured from
unauthorized access. There are many ways of enforcing security and protecting your
server through software, but most of these can be circumvented if the server can be
physically accessed or attacked. The local file system security can be bypassed if someone
can boot a server from another source, and this can lead to security-related files or data
files being copied and the data compromised.
SQL Servers are no exception. But because they can be easily set up on many platforms and
are used in testing new solutions, sometimes the servers' physical security isn't maintained as
they're moved to an employee's office or cubicle.
If you're storing enterprise data on a SQL Server, the server should be stored in a physically
secure manner. This means behind a locked door with a limited number of people able to
access the machine. Access controls that log and control which individuals can access the
room are preferred; they're even mandated in some environments.
SQL Servers often have large disk subsystems, so be sure the disks are secured to prevent their
physical theft. Due to the large data sets, tape backup systems are often used. Make sure physical
control over these tapes is maintained and they aren't allowed to sit on a desk or other
unsecured area where unauthorized people have access to them.

SKILL SUMMARY

This Lesson has investigated how to design Windows server-level security. The server-level
policies provide the highest level of security for SQL Server. Your password and encryption
policies should provide the level of security you need, balanced with the performance required
on your server. The services, service account, and firewall policies should be set to the
absolute minimums required for each server. Enabling all services or opening all possible ports
unnecessarily increases the surface area available for attack on your server. Configure and
make available those items only when you need them, and disable them when they're no
longer needed.
Security is an ongoing process and should evolve as your server changes. Developing policies
and procedures that make the least amount of resources available from a security perspective
will help to ensure that you're protected and that your server functions in an optimum
manner at all times.
For the certification examination:

Understand the SQL Server password policy. You should know the options for password
policies in SQL Server and the impact of each one.

Understand the different SQL Server encryption options. You should know how encryption
is configured at the server level in SQL Server.

Know how to properly configure a service account. SQL Server has different sections that
require service accounts, and you need to know how they should be configured.

Understand how antivirus software interacts with SQL Server. You should be able to
configure antivirus software to coexist with a SQL Server instance.

Know how to enable and disable services. SQL Server consists of multiple services, and
you should understand how and why to enable or disable them.

Understand how server-level firewalls interact with SQL Server. A server-level firewall is a
software service that runs alongside a SQL Server instance. Understand how these interact
and how they should be configured.


Knowledge Assessment
Case Study
The Ever-Growing Wealth Company
The Ever-Growing Wealth Company manages retirement funds for many people
and is concerned about the security of its data. To ensure that its database servers are
adequately protected, the company decides to review and revamp its security policies.

Planned Changes
The company's management thinks the security policies for its applications must be
strengthened and that encryption needs to be deployed. However, these changes can't
cause problems in the event that disaster-recovery plans must be implemented.

Existing Data Environment


The company currently has two SQL Servers that separately support two different
applications. A third SQL Server receives copies of all backups immediately after they're
completed and is available in the event of a disaster. One of these, called SQLWeb,
supports the company Web site on the Internet. The other, SQLTrading, supports the
portfolio management and trading application.
SSIS is expected to be used to move some data between these two servers.

Existing Infrastructure
All these servers are stored in the company's data center, which is a climate-controlled,
converted office in the company's current location. The company would like to move all
its servers to a co-location facility with a dedicated network connection back to the office.
Currently, a tegwc.com domain contains two main organizational units (OU), one for
the internal employees and one for any client accounts.
The two SQL Servers are named instances that use dynamic ports. A firewall protects
the entire network, but all servers exist in a flat Ethernet topology as shown in the Case
Exhibit of this case study.

Business Requirements
The clients of Ever-Growing Wealth expect to be able to access their data at any time
of the day or night. The existing disaster-recovery plan allows system administrators a
five-minute response time to fail over the SQL Servers, and this is deemed acceptable.
However, it can't take more time than this to get the application running.
The company expects that regulatory requirements will be enacted soon for all financial
companies, so the strongest encryption possible is preferred, balancing the performance
of the servers. Newer hardware is available to make up for any issues from the implementation of encryption.

Technical Requirements
For the new servers, the company purchased the next generation of hardware to allow
for the additional load of encrypting data. However, complete encryption of all data
using asymmetric keys will likely overload these servers; therefore, the security policy
must work within these hardware constraints.
Each instance has a SQL Server Agent service that performs various functions,
including copying backup files to another server and running business maintenance
jobs that access the mail server.

Designing Windows Server-Level Security | 127

The existing named instance configuration can't be changed because it's mandated by
the disaster-recovery plan.
Network firewalls are set up to protect the internal network, but it has been decided to
also use the built-in Windows firewalls.
The existing applications use SQL Server logins from clients to access data. This
structure can't be changed, but better security can be built into the application to take
advantage of SQL Server's capabilities.
Case Exhibit

(Network diagram: the Internet connects through a firewall to a flat Ethernet segment containing the internal file server, internal PCs, the Web server, and the WebSQL and TradingSQL servers.)

Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
Use the information in the previous case study to answer the following questions.
1. The default Windows 2003 password policy has not been changed. Which of the
following passwords would be acceptable for a SQL Server login named BillyBob?
a. Kendall01
b. KityK@t
c. BillyBob2$
d. Barnyard
2. The company will continue to use SQL Server logins for its applications, but it will
reissue passwords to all its clients. Which of the following password options should you
check to meet the business requirements? (Choose as many as needed.)
a. Enforce Password Policy
b. Enforce Password Expiration
c. User Must Change Password at Next Login
d. Require Complex Password
3. You are planning to change the service accounts for the SQL Server 2005 database
instances. To ensure that you meet the business requirements for disaster recovery, which
account should you choose for each SQL Server Agent named instance?
a. Local System
b. Local Service
c. A Domain User
d. Network Service

128 | Lesson 5

4. After installing your SQL Server 2005 server, it appears to be running very slowly.
Investigation reveals that the mandatory antivirus software is scanning your database
files. What should you do?
a. Remove the antivirus software.
b. Disable the antivirus software.
c. Stop the software from scanning the drive where the SQL Server executables are
located.
d. Stop the software from scanning the data and log files.
5. Because the clients for the Ever-Growing Wealth Company renew their contracts for
services annually, you want those clients who do not renew their contracts to have their
access revoked automatically. What type of encryption supports this?
a. Use a DES key to encrypt data, and require it to change every year.
b. Use certificates issued to each client that the application will use to authenticate
users.
c. Use an asymmetric key that you generate and send to clients to install on their
computer with the application.
d. Set a password age of 365 days, and force clients to change their password through
the application when it expires.
6. Based on the information in the case study, how many services will be running on each
active SQL Server instance?
a. 1
b. 3
c. 5
d. 10
7. Which type of encryption is recommended for sensitive data on each server?
a. Shared DES symmetric keys used to secure DES keys that encrypt data
b. Shared Triple DES symmetric keys used to secure DES keys that encrypt data
c. RSA 1024 keys used to secure AES_256 keys that encrypt data
d. Certificates used to secure AES_256 keys that encrypt data
8. How many instances of Integration Services need to be installed on the spare SQL Server
2005 server for disaster recovery if both the WebSQL and TradingSQL servers could
fail over at the same time?
a. Zero, and another SQL Server 2005 server is needed
b. One for both instances
c. Two, one for each instance
9. A certificate is which type of security mechanism?
a. Symmetric
b. Asymmetric
10. In developing a strong security infrastructure, you decide to install firewalls to protect
the internal network from the servers as well as an additional firewall that segregates the
external web server and WebSQL SQL Server 2005 server from the other servers. What
other actions should you take? (Choose as many options as needed.)
a. Configure each instance to use a specific TCP/IP port for communicating with
clients.
b. Configure each servers firewall to allow port 1433 (TCP) through.
c. Configure each SQL Server servers firewall to allow a specific TCP port through to
each SQL Server server depending on each database servers TCP/IP configuration.
d. Configure each named instance to use port 1433 (TCP).

LESSON 6

Designing SQL Server Service-Level and Database-Level Security

LESSON SKILL MATRIX
TECHNOLOGY SKILL (70-443 EXAM OBJECTIVE)

Design SQL Server service-level security. Foundational
Specify logins. Foundational
Select SQL Server server roles for logins. Foundational
Specify a SQL Server service authentication mode. Foundational
Design a secure HTTP endpoint strategy. Foundational
Design a secure job role strategy for the SQL Server Agent Service. Foundational
Specify a policy for .NET assemblies. Foundational
Design database-level security. Foundational
Specify database users. Foundational
Design schema containers for database objects. Foundational
Specify database roles. Foundational
Define encryption policies. Foundational
Design DDL triggers. Foundational

KEY TERMS

data definition language (DDL): A subset of T-SQL commands that create, alter, and delete structural objects such as tables, users, and indexes in SQL Server.

data manipulation language (DML): A subset of T-SQL commands that manipulate data within objects in SQL Server. These are the regular T-SQL commands such as INSERT, UPDATE, and DELETE.

role: A SQL Server security account that is a collection of other security accounts that can be treated as a single unit when managing permissions. A role can contain SQL Server logins, other roles, and Windows logins or groups.

schema: Each schema is a distinct namespace that exists independently of the database user who created it; a schema is a container of objects. A schema can be owned by any user, and its ownership is transferable.

scope: A division of SQL Server's security architecture (principals, permissions, and securables) that places securables into server-scope, database-scope, and schema-scope divisions.



SQL Server runs on top of a Windows operating system host, but it is a full system
inside itself. There are a number of server-level security features that can be configured
and must be properly set up to ensure the entire database service is secure.
In the previous Lesson, you looked at securing SQL Server from the Windows level, the highest
level of security assignment. This Lesson examines many of the server-level SQL Server security
items that affect the entire database server, such as logins, server roles, endpoints, SQL Server
Agent, and .NET assemblies, as well as the high-level principals in the database. The next
Lesson will delve further into the server and examine the security of individual objects.

REF

Lesson 7 covers more aspects of securables.

All the entities that can request resources in SQL Server (in other words, those logins, users,
or processes that can perform queries and make changes to the server) are known as principals.
The securables are the resources that the principals can access. In SQL Server, there are
three levels of principals: the Windows level, the SQL Server level, and the database level.
This Lesson looks at logins (the first two levels) as well as users and roles (the third level).

Creating Logins
THE BOTTOM LINE

In order for a user of any sort, including some of the SQL Server components, to access
data or perform a job in the SQL Server, the user must log in to the server.
A login is required to gain access to resources, although the login itself doesn't grant the user
any rights other than the ability to connect to the server. A login is one of the principals in
SQL Server, an entity that can request resources from the server.

REF

Lesson 4 examined Windows authentication versus SQL Server authentication.

REF

Lesson 5 examined how passwords are treated for SQL Server-authenticated logins.

REF

See the Mapping Database Users to Roles section in this Lesson.

Just as a user must log in to Windows, either to a local machine or to a domain, a user must also
log in to SQL Server. Each SQL Server is separate, and they don't share logins, although SQL
Server has the provision to trust another entity with authentication and enable a user to carry
that authentication through to SQL Server. SQL Server uses two types of logins:
Windows-authenticated logins
SQL Server-authenticated logins
There are two main differences between these logins from the SQL Server administrator point
of view, but the server treats them the same once the user logs in.
The first difference is that SQL Server trusts Windows-authenticated logins in that it
assumes the local machine security system or the Active Directory (AD) domain has authenticated the user. SQL Server accepts the token presented by Windows, and, if this user has
a matching login, no further authentication is performed. Windows authentication allows
a user or a group to be added to SQL Server, which lets an administrator take advantage of
existing Windows groups for security assignment in SQL Server.
The second difference is that SQL Server logins are individual users only; no groups are available
using this type of login. However, SQL Server can take advantage of some of the advantages of
Windows logins by enforcing password policy in some cases.
You can add both types of logins the same way; the only difference is that you must specify a
password for SQL Serverauthenticated logins. Creating a login, however, doesnt grant rights
to a particular database. That requires a user to be mapped to this login.
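As a minimal sketch, both kinds of logins are created with the CREATE LOGIN statement; the login names and password below are placeholders rather than values from the exercises:
-- Windows-authenticated login for a domain user or group
CREATE LOGIN [MYDOMAIN\Pat] FROM WINDOWS;
-- SQL Server-authenticated login; CHECK_POLICY applies the Windows password policy
CREATE LOGIN StableApp
WITH PASSWORD = 'Str0ng!Passw0rd',
CHECK_POLICY = ON;
GO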
One special login cant be removed from the server: the SQL Server system administrator,
or sa, login. This login is built into SQL Server and is similar to the Administrator login on
Windows. The sa login is the highest-privileged login, is a member of the sysadmin server role,
and can perform any operation on the server. You can rename this user, but you cant drop
sa; you also cant remove sa from the sysadmin role. The sa login is disabled if SQL Server
Authentication isnt enabled; however, it can also be disabled manually by an administrator.


TAKE NOTE
The sa user is always assigned SID 0x01, regardless of the name. You can see this by querying master.sys.syslogins.

LAB EXERCISE
Perform the exercise in your lab manual.
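For example, the following query (a sketch) returns the sa login under whatever name it currently has:
SELECT name, sid
FROM master.sys.syslogins
WHERE sid = 0x01;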

Exercise 6.1 shows how to add a login; additional options for this login are detailed throughout
this Lesson.
As mentioned previously, creating a login doesn't grant any rights to the actual client other than the ability to connect to the server. In fact, without a user mapping to the default database, the login will fail. This happens because the login process sets the initial context for the user to the default database if one isn't specified, and after verifying the login, a user mapping is required to establish the session.
Before examining the user mapping, the next section looks at the server roles available to
the login.

Granting Server Roles


THE BOTTOM LINE

A role in SQL Server is analogous to a group in Windows. You can grant certain rights to
a role and then add one or more users to the role to receive those rights. SQL Server has
three types of roles: server roles, application roles, and database roles.
This section examines server roles, and subsequent sections will discuss the other roles.
The server roles in SQL Server are fixed in that their rights are predetermined, and you can't add, change, or delete the roles. Table 6-1 describes the available server roles.

Table 6-1
Server roles in SQL Server

CERTIFICATION READY?
Ensure that you know the fixed server roles. Expect at least one exam question on this topic.

ROLE            DESCRIPTION
sysadmin        This role grants its members rights to all functions on the server and defaults to dbo as a user in each database.
serveradmin     This role can change the serverwide configurations of SQL Server and initiate a shutdown.
setupadmin      This role can work with linked servers (add, configure, and remove).
securityadmin   This role works with logins (add, edit, and drop), grants server-level permissions (GRANT, REVOKE, and DENY), and works with database-level permissions. This role can change SQL Server login passwords.
processadmin    This role can terminate processes running on SQL Server.
dbcreator       This role can create, alter, or drop any database.
diskadmin       This role can manage the disk files on SQL Server.
bulkadmin       This role allows its users to execute the bulk-insert functions of SQL Server.

You can grant the fixed server roles to any login on your server. This includes any groups that
are added as Windows-authenticated logins. Using Windows groups allows you to manage
your security at the Windows level and ensures that you don't have a mismatch between the
Active Directory mappings for your employees and their capabilities in SQL Server.


Granting a server role to a login should follow the same principles discussed in Lesson 4 of
using the least privileges required for a particular function.
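For instance, in SQL Server 2005 and 2008 a login is added to or removed from a fixed server role with system stored procedures; the login name below is an assumption:
-- Grant the dbcreator role to a login
EXEC sp_addsrvrolemember 'MYDOMAIN\Pat', 'dbcreator';
-- Remove the role when it is no longer needed
EXEC sp_dropsrvrolemember 'MYDOMAIN\Pat', 'dbcreator';
GO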
By default, the local Windows Administrators group (BUILTIN\Administrators) is added as
a login and placed in the sysadmin role. This usually means all your Active Directory domain
administrators are SQL Server system administrators by default. This violates the separation
of duties as a best practice and should be changed as soon as you've created a separate system
administrator login.

LAB EXERCISE
Perform the exercise in your lab manual.

In Exercise 6.2, you'll add two server roles to the Delaney login you created in Exercise 6.1.

Mapping Database Users to Roles

REF
See the Granting Database Roles section in this Lesson.

TAKE NOTE
The exception is if the guest user exists. A login can access a database other than its default using guest.

MORE INFORMATION
You learn of other roles later in this Lesson in the Granting Database Roles section.

LAB EXERCISE
Perform the exercise in your lab manual.

A database user is a principal that is authorized to access a particular database and is automatically a member of the database's public role.
Every login must be mapped to a user to allow it to access a database, including the default
database.
One user, the guest user, is created by default in each database; it's assigned to any login that doesn't already have a user mapped to it in a database and has the rights of the public role. This means that by default a user can access any database on your server if the guest user exists in that database. As a security precaution, you should remove the guest user from all your databases.
You grant users rights to individual objects, and you can include users in roles. The recommendation is that you shouldn't grant any rights to a user and should instead include users in one or more roles to receive their permissions. This is similar to the recommendation for rights not being granted to Windows users, only Windows groups, with those users included in groups for permissions.
You can create users when a login is created or add users later and map them to existing
logins. Because a login requires a user mapping in the default database for the login to
succeed, at least one user is usually created when a login is created. Additional users can be
mapped at a later time. Exercise 6.3 shows how a new user can be created and mapped to the
Delaney login from Exercise 6.1.
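A minimal sketch of creating a user for an existing login (the database name here is an assumption):
USE Sales_Prod;
GO
-- Map a database user to the server-level login of the same name
CREATE USER Delaney FOR LOGIN Delaney;
GO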
Although logins must be mapped to users, the reverse isn't true: Users don't necessarily have
to be mapped to logins. When this is the case, the user may be in one of two states: orphaned
or mapped to an asymmetric key mechanism.
Orphaned users occur usually when a database is restored on a different server from the one on
which it was created. The mapped login may not exist or may have a different SID on the new
server. These users need to be remapped using the sp_change_users_login stored procedure,
which will remap the user to another login or create a corresponding login that is mapped to
the user. You can find more information about sp_change_users_login in Books Online.
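As a sketch, the procedure can first report the orphaned users in the current database and then remap one of them; the user and login names are assumptions:
-- List users that have no matching login on this server
EXEC sp_change_users_login 'Report';
-- Remap an orphaned user to an existing login
EXEC sp_change_users_login 'Update_One', 'Delaney', 'Delaney';
GO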
When a user is mapped to an asymmetric key mechanism, it can be either an asymmetric key or
a certificate that exists in the database. This mapping takes place when the user is created and a
specific key that already exists is mapped to the user with the CREATE USER statement.
When a user is mapped to a key mechanism, it means that a connection is made using Service
Broker or another service that supports certificates or asymmetric keys. The login is made using
the certificate or key and then mapped through to the user who is mapped to that certificate. All
other security checks are then made on this mapped user just as with any other database user.

TAKE NOTE
Certificate-based authentication and key-based authentication aren't available with client connections from tools such as Management Studio and SQLCMD.
The next two sections will examine how you handle additional database security with schemas
and database roles.


Securing Schemas
THE BOTTOM LINE

A schema is a way of dividing the objects in a database into a namespace, which is a domain where each object has a unique name.
A schema is in some ways tightly bound to database users but in other ways shares no implicit connection; however, it's included as part of the SQL-92 standard specification. A database may contain many schemas, or only the default namespaces created by the server: dbo, sys, and INFORMATION_SCHEMA, along with the schemas for each fixed database role.
A little history on this topic will help to explain how it works. Prior to the 2005 version of
SQL Server, there was the concept of an owner for each object. This was the third part of the
four-part naming structure. Each object in a database followed this form:
server_name.database_name.owner_name.object_name
For example, say the user Tia created a table called Horses. The full name of the table would
be Tia.Horses in the database, and no other object could have that name. There could be
another table created by the database owner, dbo, as in dbo.Horses, but it would be a separate
object from Tia.Horses, with its own data, permissions, and storage.

TAKE NOTE
The four-part naming structure in SQL Server 2005 and 2008 is no longer server.database.owner.object as it was in SQL Server 2000. It's now server.database.schema.object.

A user seeking to query the Horses table would need to know whether they wanted to query Tia.Horses or dbo.Horses. A simple select * from Horses by the user Brian would default to querying the Brian.Horses table and then dbo.Horses if the first one didn't exist. In other words, each database user had their own implicit schema based on their username.
Although this was a workable method of separating objects into their own namespace, it created problems when users needed to be removed from the database. Because a user can't be dropped when they own objects, a user who owned a large number of objects would need all those objects moved to a new owner (essentially, a new namespace). This created a tremendous amount of work in a database of any size and required application changes where code explicitly referenced a particular namespace.
Starting with SQL Server 2005, the schema has been separated from the owner of an object.
Now a schema creates the namespace, and although its owned by a database user, removing
the user merely requires reassigning the schema to a new owner. All the namespaces remain
the same because the schema name doesnt change and the owner has no effect on the security
of the namespace. This is known as user-schema separation.
A schema is essentially a grouping mechanism in a database; allowing a number of objects to be classified in the schema's namespace makes it easy to assign higher-level permissions to roles or users by granting them permissions to the schema or making the schema their default schema. Just as a role lets you group users, the schema lets you group objects.
Every object in the database must belong to a schema. If one isn't specified when the object is created, the object falls under the default schema of the user creating the object. A schema can have any valid SQL Server name, but the schema names in a database constitute their own namespace and must be unique. Often, a schema is used to group the objects that belong to a client-side application, or a portion of one (tables, views, stored procedures, and so on), into a single namespace, with permissions assigned to that namespace to simplify maintenance. For example, if the application deals with human resources data managed by a client application developed in Access Projects, the schema for that software program might universally be personnel.


TAKE NOTE
The default schema for users is the dbo schema.

In large applications, it is common for the application code to have its own authentication method. In such cases, the use of users and schemas in SQL Server is much less important, and Application Roles may matter more. Application Roles are discussed later in this lesson.
The default namespace for a user appears on the user Properties page (shown in Figure 6-1). You
can change this to any schema in the database using this dialog box or the T-SQL ALTER
USER command.
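In T-SQL the change is a single statement; the user and schema names below are assumptions:
ALTER USER Delaney WITH DEFAULT_SCHEMA = personnel;
GO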

Figure 6-1
User default schema

LAB EXERCISE
Perform the exercise in your lab manual.

Each schema, like any other object, also has an owner. Figure 6-1 shows the check boxes just below the default schema that you can use to specify ownership of schemas by this user.
by this user. A role can also own a schema, as shown in Figure 6-1. Because a role can own a
schema, multiple users can own a schema, through role and/or group membership or specific
inclusion, simplifying permissions for that group of objects. Exercise 6.4 walks you through
creating a schema and assigning an owner.
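A minimal sketch of the equivalent T-SQL (the schema and owner names are placeholders, not the exercise values):
-- Create a schema owned by a database role
CREATE SCHEMA personnel AUTHORIZATION db_owner;
GO
-- Ownership can later be transferred without renaming the schema
ALTER AUTHORIZATION ON SCHEMA::personnel TO Delaney;
GO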

Granting Database Roles

THE BOTTOM LINE
Use roles to define access categories. Then control the permissions using the role container. Add or remove users, as needed, to accommodate changing employee assignments.

Database roles are the last of our internal security mechanisms for assigning rights to principals and allowing access to securables. You are introduced to the three types of database roles in this section: fixed database roles, user-defined database roles, and application roles.
You use these roles to group users so you can easily assign permissions to them. It's recommended that permissions not be assigned to individual users, only to roles, just as it's recommended in AD to assign ACL permissions to groups instead of individual users. Then, you can add users to the role to receive the appropriate rights. This approach greatly reduces the administrative burden when objects and users are added to and removed from the database. The three types of roles are all used differently.

WARNING
The exception to the previous paragraph is the public role. Because every user is a member of this role, any rights granted to this role are extended to every database user. It's recommended that no right be granted to this role and that you instead use a user-defined role.

Working with Fixed Database Roles


Fixed database roles are analogous to the server roles discussed earlier. These are roles created by SQL Server with specific rights that can't be changed; these roles can't be deleted.
Table 6-2 describes the fixed database roles.



Table 6-2
Fixed database roles

ROLE                 DESCRIPTION
db_accessadmin       This role allows members to grant or revoke database access to logins.
db_backupoperator    This role allows its members to back up the database.
db_datareader        This role allows members to read data (SELECT) from all user tables in the database.
db_datawriter        This role allows members to change data in all user tables (INSERT, UPDATE, and DELETE). It doesn't imply that you can read data.
db_ddladmin          This role can carry out any data definition language (DDL) statement in the database.
db_denydatareader    This role is prevented from reading data from all user tables.
db_denydatawriter    This role is prevented from adding, changing, or deleting information from any user table in the database.
db_owner             This role is the highest-level role in the database and can perform all configuration or management operations in the database. This includes dropping the database.
db_securityadmin     This role can modify the permissions and roles in the database.
public               This role initially has limited rights to objects in the database, but it's assigned to every user. This role can't be removed. It's a user-defined role in its permissions, but it's mentioned here because the server creates this role.

Because these are fixed roles, they aren't suited to securing your individual objects. Instead, most of these roles are used for administrative functions or widespread access to objects.
The db_datareader and db_datawriter roles are usually granted to developers because they allow access to all tables. Granting these rights to individual users means they can access all data in all tables. If you create a new table that you only want a limited number of users to access, anyone in this role will still have access. For this reason, limit the use of these roles to nonproduction databases.
Similarly, the db_denydatareader and db_denydatawriter roles have far-reaching effects. These are good roles in specialized situations, specifically auditing and read-only situations, respectively.
The public role is unique in that, although it's created by SQL Server, it has no explicit rights to your objects. This role has limited rights to read system views by default and is assigned to all users. This assignment can't be removed from any user. However, the administrator can change the permissions of this role, as discussed in the next section.

LAB EXERCISE
Perform the exercise in your lab manual.

Like server roles, these should be assigned only in accordance with the least privileges needed
for a job function. For example, assigning the db_owner role to a developer who only needs
to run DDL and back up the database is a poor practice.
Exercise 6.5 shows how to add a user to a role.
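In SQL Server 2005 and 2008 this is done with sp_addrolemember; the database and user names below are assumptions:
USE Sales_Prod;
GO
-- Allow this user to back up the database, and nothing more
EXEC sp_addrolemember 'db_backupoperator', 'Delaney';
GO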

Working with User-Defined Roles


CERTIFICATION READY?
Ensure that you know
the fixed database roles.
Expect at least one exam
question on this topic.

User-defined roles are similar to fixed database roles in that they allow multiple users to be added. User-defined roles can own schemas, and they can have permissions assigned to them. However, these roles aren't added when SQL Server is installed; rather, the administrator creates them as needed. These roles are analogous to Windows Active Directory groups that are used to combine a series of users for easy management of permissions.


It's recommended as a best practice in SQL Server that rights to securables not be granted to
individual database users but rather be granted to roles. Each role should be granular enough
to provide security to a set of functionality, but not so granular as to require a cumbersome
amount of administration. Most databases have two to five roles, one for each major section
of functionality or group of users that will access the database.

WARNING
Granting permissions to public means you can't easily revoke those permissions from a specific user later if need be.

As mentioned in the previous section, the public role is available in every database and is automatically assigned to every user in the database. Even though this role is created by SQL Server, the administrator can modify the permissions for this role, just like any user-defined role. Any permission assigned is granted to every user in the database, just as any specific denial of access prevents every user from accessing that object. Just as it's recommended that you not grant explicit rights to the Everyone group in your Active Directory domain, it's recommended that you not grant rights to this role in a database. Instead, create another role, and assign the specific rights you require to that role.
The administrator can create any other role he or she chooses in order to assign varying permissions to users. The roles you create must have a unique name in the database that conforms to the SQL Server object-naming rules. You can create thousands of roles, but doing so isn't practical. Instead, you should seek to create roles that mimic the major job functions along which you typically divide the users' access.
Once you create a role, you can assign permissions to it using the GRANT, REVOKE, and
DENY statements in the same manner that you assign permissions to any user. These statements are discussed in detail in Books Online with examples and syntax elements defined.
The permissions you assign should be the necessary permissions to enable the role to work
with whatever data it needs, and no more.
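As a sketch under assumed table, role, and user names, a user-defined role might be created and granted only the permissions that job function requires:
CREATE ROLE HorseCareReaders;
GO
-- Grant only the specific rights this role needs
GRANT SELECT ON dbo.Horses TO HorseCareReaders;
GRANT SELECT, UPDATE ON dbo.FeedingSchedule TO HorseCareReaders;
-- Add an existing database user to the role
EXEC sp_addrolemember 'HorseCareReaders', 'Delaney';
GO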

TAKE NOTE
You shouldn't perform a blanket assignment of permissions. An auditing role doesn't need INSERT, UPDATE, and DELETE permissions to tables, and they shouldn't be granted along with SELECT permissions. Grant a role the specific permissions needed to an object rather than granting all rights.

LAB EXERCISE
Perform Exercise 6.6 in your laboratory manual.

TAKE NOTE
Typically you'll assign rights to the appropriate role(s) when an object is created.

CERTIFICATION READY?
A user has a login with table permissions in a database. The user also connects to the same tables through an application role. In this ambiguous situation, what permissions does the user have?
As with a user, this role can be assigned explicit permissions to perform functions in the
database, such as creating tables, performing a backup, and so on. If you must assign these
permissions and a fixed database role doesn't meet your needs, then you should assign them to
a specific role, with users added to the role. The role can also own schemas, which gives the
members the right to work with objects under those schemas.
Exercise 6.6 walks through creating a role, assigning it to a user, and assigning explicit rights
to two tables.

Using Application Roles


Application roles are a unique kind of role in SQL Server. As in previous versions, the application role isn't assigned to a user; rather, it's invoked by a user who's already connected. Once set, users assume the rights granted to the application role and execute everything in the context of this role, rather than their previous context. Users also lose any rights they had before invoking the application role.

The application role is created with the CREATE APPLICATION ROLE command, which
requires a role name and a password. This password is how the application role is secured. A
user can't invoke this role without the password, which is usually secured in an application.
This ensures that only that particular application can access these objects.
As with user-defined database roles, the application role can be granted permissions to schemas
and individual objects. You do this in the same ways shown for user-defined database roles. These


rights aren't assigned to users, and users aren't granted the application role; instead, the password
is used to move a user into the application role. These rights remain in effect until the user
disconnects from SQL Server or unsets the role using the stored procedure sp_unsetapprole.

TAKE NOTE
To revert to your original permissions without disconnecting and reconnecting to SQL Server, you must save a cookie of information when you execute sp_setapprole.
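A sketch of invoking and then reverting an application role with a cookie; the role name and password are assumptions:
DECLARE @cookie varbinary(8000);
-- Switch this session's security context to the application role
EXEC sp_setapprole 'HorseCareApp', 'AppR0le!Passw0rd',
     @fCreateCookie = true, @cookie = @cookie OUTPUT;
-- ... work under the application role's permissions ...
-- Revert to the original context
EXEC sp_unsetapprole @cookie;
GO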
Application roles aren't widely known, but they're valuable resources. If you allow your users to connect to SQL Server with their Windows account or SQL Server-authenticated account and assign permissions to that user, then they can connect with any application. This means that in addition to using your ERP application to work with data, a user could also connect with Microsoft Excel or another ODBC-compliant program and manipulate data outside the expected business application. If necessary business rules are embedded in the application and not the database, they may be bypassed. In cases where you want to ensure that only a particular application is used to access certain data, an application role can enforce this limitation, provided the application is written to invoke the role.

Introducing DDL Triggers

THE BOTTOM LINE

SQL Server supports the concept of a DDL (data definition language) trigger, which responds
to events that define objects in SQL Server. The primary events are the CREATE, ALTER, and
DROP statements and their variants. Because these statements fundamentally alter the way in
which the server can work with the addition or deletion of objects, a trigger allows auditing or
greater control over these types of changes. The objects affected by DDL statements can be data
objects (tables, views, procedures, and so on) or principal objects (logins, users, roles, and so on).
Triggers have been a part of SQL Server since its inception. Triggers are sections of code that
execute in response to some event. Prior to SQL Server 2005, triggers were limited to data
manipulation language (DML) events. These are INSERT, UPDATE, and DELETE statements that modify or manipulate data.
Just like a DML trigger, which fires when a particular event occurs, a DDL trigger fires when a DDL event for which it's set up occurs. These types of triggers are more complex and use different internal structures to determine what data is available from the event. DDL triggers execute after the T-SQL statement is complete. These triggers can't be used as INSTEAD OF triggers.
The following sections discuss the scope, events, and recommended policy for DDL triggers.

Understanding DDL Trigger Scope


A regular DML trigger is scoped to the table against which it's created. When the particular event for that table, and only that table, is executed, the trigger fires and performs its actions. A DDL trigger, however, is scoped differently because the events for which it fires aren't tied to tables in a particular database/schema combination. Instead, a DDL trigger has two scopes: server-wide and database-wide.
A server-wide scope means that any time the particular event executes anywhere on the server
instance, in any database, this trigger fires. An example is the CREATE ENDPOINT
command. If a server-wide trigger is set for this event, then it will execute if an endpoint is
created in any database that exists on the instance.
A database-wide scope, in contrast, means that only those events executing in a particular database cause the trigger to fire. If a CREATE USER DDL trigger is created in the Sales_Prod database and scoped to that database, then if a CREATE USER statement is executed in the Sales_Dev database, the trigger doesn't fire. This limits the ability and necessity of the trigger to perform any action across the server. A separate DDL trigger for the CREATE USER event would need to be created in the Sales_Dev database to track events in that database.
Using scope can limit the execution of these triggers to only those events that need to be acted upon. Often, a database administrator is concerned about events in one database that aren't important in another.
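For example, a server-scoped DDL trigger (a sketch; the trigger name is an assumption) is declared with ON ALL SERVER, while a database-scoped trigger uses ON DATABASE:
CREATE TRIGGER AuditLoginDDL
ON ALL SERVER
FOR CREATE_LOGIN
AS
PRINT 'A login was created on this instance'
GO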

Specifying DDL Trigger Events


When a DML trigger is created, you specify the particular event (or events) in the code. For
example, a trigger to copy information from an inserted sales invoice might look like this:
CREATE TRIGGER sales_insert ON Sales FOR INSERT
AS
INSERT sales_audit (invoice, customer, salesdate)
SELECT invoice, customer, salesdate
FROM inserted
GO

This trigger notes the scope (the Sales table) and the event (INSERT). Multiple events can be
included if necessary.
You can also set a DDL trigger to respond to events, as shown here:
CREATE TRIGGER NoDrop
ON DATABASE
FOR DROP_TABLE
AS
PRINT 'Disable Trigger NoDrop to drop tables'
ROLLBACK
GO
TAKE NOTE
Events that occur at a database level, such as CREATE USER, can be captured in a server instance scoped DDL trigger.
Figure 6-2
DDL trigger events

This trigger is scoped for the current database and is fired in response to a DROP TABLE
command, which would fire the DROP_TABLE event. See Books Online for the events that
occur on a SQL Server.
In addition to events, a DDL trigger can also respond to event groups. These are broader classifications of events, such as the DDL_LOGIN_EVENTS group, which covers the CREATE
LOGIN, ALTER LOGIN, and DROP LOGIN events. These groups are linked in a tree, a
portion of which is shown in Figure 6-2. The complete tree is available in Books Online and
is extensive, covering server-level and database-level events.


Using an event group instead of the event class ensures that you don't forget an event related to that class. It's easier to administer and work with one trigger that tracks all LOGIN events than
it is to create three separate triggers for each of the CREATE, ALTER, and DROP events. Be
aware of the tree structure, however, because lower-level events will cause the trigger to fire. The
code in your trigger must be able to handle all the events in your group to work properly.
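A sketch of a trigger that covers an event group and inspects the firing event with the EVENTDATA() function; the trigger name is an assumption:
CREATE TRIGGER AuditLoginChanges
ON ALL SERVER
FOR DDL_LOGIN_EVENTS
AS
-- EVENTDATA() returns XML describing whichever event fired the trigger
DECLARE @data xml, @eventType nvarchar(100);
SET @data = EVENTDATA();
SET @eventType = @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)');
PRINT @eventType;
GO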

Defining a DDL Trigger Policy


With the addition of DDL triggers, the administrator has a great deal more administrative and auditing capability than existed before, without requiring the overhead
of running traces and analyzing their output. However, this means you need to judiciously use this capability to avoid placing an undue burden on either the server or
the administrator.
Before discussing how to define your trigger policy, please examine a few more features of
DDL triggers that may impact how you use them. First is the capability of having multiple
triggers fire for an event. You can set up two separate triggers for the CREATE USER event
and have them both fire when this event occurs. This may be necessary for any number of
business reasons, but you must address two factors in your policy.

WARNING
Beware of encrypting triggers. If the code is encrypted, the trigger can't be replicated. Also note that DDL triggers aren't fired in response to temporary table operations. This is an inherent limitation and means you can't prevent or audit these events with these triggers. Make sure all developers and administrators know that as part of your policy.

The first is the firing order. This can be important, because one trigger may depend on the other having already executed in order to function as designed. You can change the firing order of your triggers, but this must be done explicitly; and you should ensure that all instances of multiple triggers, DML and DDL, have the firing order set and known.
Second, each trigger consumes resources. Having multiple triggers means more work for the server to set up the execution environment as well as more potential work in the same transaction. Don't overload the server with unnecessary triggers. If possible, ensure that triggers are consolidated, perhaps requiring code reviews for multiple-trigger situations.
Another feature of triggers to be aware of is the capability for code encryption. This is discussed in the next section, Defining a Database-Level Encryption Policy.
DDL triggers also require different coding structures to access the data about events. Unlike
DML triggers, which use the inserted and deleted tables, a DDL trigger requires you to work
with event data. Make sure anyone using these triggers has been properly trained to gather
the data.

REF
Lesson 4 covers designing security infrastructure.

As has been shown, these triggers are powerful but much more complex than DML triggers. Therefore, you need a more extensive policy to deal with their use in your servers. The
features mentioned should be noted in your policy, but you should also have guidelines indicating where and why these triggers are created. As discussed in Lesson 4, when you look at
designing a security infrastructure, there may be regulatory or industry guidelines about controlling or auditing various types of events, most notably security changes. DDL triggers can
provide a way to do this, but don't overuse the triggers and cause a large burden on the server or company. For example, preventing new logins may provide some level of security, but if the administrator can't disable the trigger when needed in a timely fashion, the business may be negatively affected.
These triggers also need to be monitored for the information they may return or store. This
generally means an auditing environment, so you should have a policy indicating how this
data is secured and made available when needed. Security for the DDL trigger data is as
important as the security of the data used by the company, because it can show whether the
company data is being properly accessed or compromised.


Defining a Database-Level Encryption Policy


THE BOTTOM LINE

SQL Server lets you use the WITH ENCRYPTION keywords in the CREATE or ALTER
statement to encrypt the data that makes up this object. This prevents anyone who gets a
copy of the database from reading the T-SQL code that forms the object.

In Lesson 5, you studied an overall encryption scheme and policy for SQL Server using the
encryption mechanisms that turn plain-text data into ciphertext. The keys exist at a server
or database level, and the encryption occurs at a table or column level. As mentioned in that
Lesson, the minimum amount of data should be encrypted. This ensures good performance
of your SQL Server. Your policy should be specified at a corporate level, with exceptions
documented as needed.
TAKE NOTE
The encryption option can't be used with CLR assemblies.

WARNING
It is very important to keep a copy of the original code. The encrypted object code can't be recovered.

Another type of encryption is available in a database: encryption of code. A number of code objects (stored procedures, functions, triggers, and so on) store the plain-text code of the object in the database. This means any user with rights to read the sys.syscomments table can read the code for the object and potentially misuse that information. This code may also be proprietary and a part of your company's intellectual property.
When you encrypt the code for an object, it can't be read from the server, which provides a degree of protection for how the object works. This security feature can help prevent malicious users from examining your code for vulnerabilities to SQL injection or other hacks as well as protect your intellectual property. However, encrypted code requires that your developers be careful with the original source.
Your policy regarding encryption should specify whether this feature of SQL Server will be
used. If you choose to use it, it's recommended that it be applied to each object type and not
to each object. In other words, specify that all triggers be encrypted rather than some being
encrypted and some not.
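For example, a hypothetical procedure created with its definition encrypted:
CREATE PROCEDURE dbo.CalculateBoardingFee
WITH ENCRYPTION
AS
SELECT 1;  -- proprietary business logic would go here
GO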

Transparent Data Encryption


SQL Server 2008 includes a new feature known as transparent data encryption (TDE).
This is a method of encrypting an entire database without requiring the modification
of any application code. This encryption is transparent to the application code as SQL
Server 2008 automatically handles the encryption and decryption of all data going in
and out of the database. The primary purpose of this TDE feature is to have the entire
database encrypted so that any unauthorized person having direct access to copies of the
database files and/or transaction log files cannot decrypt and read the data. This ensures
that if, for example, backup files on tapes fall into the wrong hands, the data is still secure from unauthorized access.
While TDE is implemented for individual databases, it is actually partially an instance-level feature of SQL Server 2008, because the instance's tempdb is also encrypted when TDE is active. Different databases within a single instance can be encrypted with TDE using different encryption algorithms. Tempdb is encrypted as soon as any one database in the instance is encrypted, and when it is encrypted it always uses the AES_256 algorithm.
As TDE is a transparent database-level encryption methodology, individual columns of data can still
be encrypted. This allows existing code and encryption designs to continue to function.
TDE is established by using a certificate stored in SQL Server. This certificate is based on
the master key also created and stored in SQL Server. The certificate must be in the master
database. An example of encrypting an entire database with TDE is shown below.



USE mydatabase;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE mycert;
GO
ALTER DATABASE mydatabase
SET ENCRYPTION ON;
GO

REF
Lesson 5 examined keys, certificates, and encryption algorithms.

It is critically important to understand that the database master key and the encryption certificate need to be backed up to a secure location. This location also needs to be separate from
regular backups or other copies of the database files. The encryption security provided by TDE
is meaningless if database files and the certificate both fall into the hands of the wrong person.
Further, for disaster recovery or other restore operations to a different server, the certificate will be required for restoring a TDE-encrypted database. You can think of the certificate as the key to unlocking your database. Certificates are very rarely changed, so securing a backup copy of critical certificates should be an easy activity.
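The prerequisite master key and certificate, and the certificate backup, might look like the following sketch; the names, passwords, and file paths are assumptions:
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'M@sterKey!Passw0rd';
GO
CREATE CERTIFICATE mycert WITH SUBJECT = 'TDE certificate';
GO
-- Back up the certificate and its private key to a secure, separate location
BACKUP CERTIFICATE mycert
TO FILE = 'D:\SecureStore\mycert.cer'
WITH PRIVATE KEY (
    FILE = 'D:\SecureStore\mycert.pvk',
    ENCRYPTION BY PASSWORD = 'Pr1vateKey!Passw0rd');
GO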

Securing Endpoints

THE BOTTOM LINE

REF

Lesson 10 discusses
database mirroring.

Every network communication with SQL Server takes place through an endpoint, which
is a communication point for SQL Server. Endpoints exist for the protocols clients use to
communicate with SQL Server as well as for database mirroring, the Simple Object Access
Protocol (SOAP) and Web Service requests, and the Service Broker.

When the server is installed, a Tabular Data Stream (TDS) endpoint is created for each protocol that is enabled. Table 6-3 shows the protocol endpoints and the default names as set up by
SQL Server. Each protocol requires an endpoint with a unique name.

Table 6-3
Default protocol endpoints

PROTOCOL                                     DEFAULT ENDPOINT NAME
Dedicated Administrator Connection (DAC)     Dedicated Admin Connection
Named Pipes                                  TSQL Named Pipes
Shared Memory                                TSQL LocalMachine
TCP/IP                                       TSQL Default TCP
VIA                                          TSQL Default VIA

The Shared Memory, Named Pipes, and DAC protocols have only one endpoint per instance.
The VIA and TCP/IP protocols have a default, but the administrator can create additional
endpoints for various services.
Each endpoint exists as an object in SQL Server, like many other objects, and permissions
are granted to allow its use. You can apply the typical GRANT, REVOKE, and DENY permission statements to the endpoint using the ALTER, CONNECT, CONTROL, TAKE
OWNERSHIP, and VIEW DEFINITION permissions. Each of these permissions is similar


to the same object permissions discussed in Lesson 7. The exception here is the CONNECT
permission, which isn't associated with most other types of objects.
Each of these endpoints enables a communication path into SQL Server, but they all function
slightly differently with different options and potential security issues. The following sections
discuss each type of endpoint.

Introducing TDS Endpoints


A TDS endpoint is one that is built to accept TDS communications. These are the
standard protocol communications used by Management Studio, ActiveX Data Objects
(ADO), Open DataBase Connectivity (ODBC), and most other SQL Server clients. The
TDS protocol is encapsulated in the underlying transport protocol (TCP/IP, Named Pipes,
and so on) and contains the T-SQL batches that are submitted for most applications.

TAKE NOTE
The Dedicated Admin Connection endpoint is an exception. Only a sysadmin login can connect using this endpoint.

The default permissions to an endpoint allow all users to connect. This permission is implicitly granted to a login when it's created. You can change this for the TDS endpoints by
DENYing access to Everyone and then granting explicit permissions to each login that will be
allowed to connect.
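The permission statements themselves look like the following sketch; the login names are assumptions:
-- Grant an individual login the right to connect through the default TCP endpoint
GRANT CONNECT ON ENDPOINT::[TSQL Default TCP] TO [MYDOMAIN\Pat];
-- Explicitly block a login from this endpoint
DENY CONNECT ON ENDPOINT::[TSQL Default TCP] TO StableApp;
GO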
Each endpoint can be associated with any IP or one particular IP on the server. It's also associated with a port. When you create a new endpoint, you can specify both of these parameters
to configure how clients will be allowed to connect to the SQL Server. For dynamic ports, the
default TCP endpoint is used. You can create additional TCP/IP listeners in the SQL Server
Configuration Manager, under the Network Configuration section, and then associate them
with an additional endpoint. VIA connections are treated the same as TCP/IP connections.

Using SOAP/Web Service Endpoints


SQL Server 2000 could create a web service and respond to queries using HTTP, but it
required integration with the Internet Information Server (IIS) on the Windows host.
Starting with SQL Server 2005, the database server can natively respond to HTTP
requests with an endpoint using SOAP.

SOAP allows method calls to be mapped to stored procedures or ad hoc batches to be sent to the server using the HTTP or HTTPS protocol. It's often used in web services as a way to programmatically access methods on a remote server.
When an endpoint is created for use with SOAP calls, it must be specified with not only the port to be used, but also the protocol (HTTP or HTTPS) along with the type of authentication that will be allowed. Five types of authentication are available; they're listed here from least secure to most:

TAKE NOTE
An important facet: Anonymous authentication isn't supported with endpoints. The user must be a valid Windows user.

Basic. One of two mechanisms in the HTTP 1.1 specification. The username and password are encoded in the header using base64 encoding. Requires HTTPS communications.
Digest. Second mechanism in the HTTP 1.1 specification. The username and password
are hashed using MD5 and compared on the server. Only domain accounts can be used.
NTLM. Authentication method used in Windows 95, 98, and NT4.
Kerberos. Supported in Windows 2000 and later. A standard authentication mechanism
used by Active Directory and requiring that a service principal name (SPN) be registered
for SQL Server.
Integrated. Allows NTLM or Kerberos methods to be used.
Each of these types has pros and cons, although only one at a time is associated with an endpoint. You can change the authentication type using the ALTER ENDPOINT statement.
Kerberos is preferred, and Basic is the last choice in terms of security.


Each SOAP endpoint also requires a unique path on the Windows server that equates to a
virtual directory in IIS. This isn't necessarily a security issue, although setting standard paths
on all SQL Servers gives potential attackers information they can exploit.
One specific security recommendation for SOAP endpoints is that you should specify only
those methods actively being used as available with this endpoint. If methods are retired, you
should remove them from the endpoint. SOAP endpoints can also support ad hoc batches,
but this capability is disabled by default.
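A sketch of a SOAP endpoint definition; all of the names, the path, and the mapped stored procedure are assumptions, and the full option list is described in Books Online:
CREATE ENDPOINT HorseCareWS
STATE = STARTED
AS HTTP (
    PATH = '/horsecare',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (SSL),
    SITE = 'MYSERVER'
)
FOR SOAP (
    WEBMETHOD 'GetFeedingSchedule'
        (NAME = 'HorseCare.dbo.GetFeedingSchedule'),
    BATCHES = DISABLED,
    WSDL = DEFAULT,
    DATABASE = 'HorseCare'
);
GO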

Working with Service Broker and Database Mirroring Endpoints


Service Broker and database mirroring endpoints share many of the same options and
security issues. The Service Broker provides a queuing mechanism in SQL Server. Lesson
10 discusses database mirroring with other high-availability technologies.
For both of these endpoints, you have the option of using a certificate for authentication by
the endpoint. This means the private key of a certificate is used on this server, and the client
must have the matching public key for authentication. Because certificates expire and there
are sometimes issues with transfer and renewal between servers, you also have the option of
using Windows authentication methods (NTLM or Kerberos).

TAKE NOTE
If both sides of the endpoint specify different algorithms, the one specified by the receiving end is chosen.

WARNING
RC4 is weaker than AES, but it's considerably faster. Choose based on your primary need: performance or security.

Unlike the other endpoints, however, these endpoints provide a fallback option. The connection can be attempted with either a certificate or Windows method and then fall back to the
other. You specify this along with which type of authentication to use first in the CREATE
ENDPOINT or ALTER ENDPOINT statements.
It's recommended that you not allow the fallback. If you truly need certificate authentication, then you should specify it and not let it fall back to Windows authentication. Often a backup connection method becomes permanent because it works, and employees won't seek to fix the primary method. If you instead let communications stop when the primary method doesn't work, it will force people to fix the primary method of communication.
One other security mechanism is available for these endpoints: encryption. The connection
used for this endpoint can be set to not use encryption, to allow it if the client is capable,
or to require it in all communications. The default is to require encryption using the RC4
algorithm. The endpoint specifies whether the AES algorithm will be used instead, or if either
algorithm can fall back to the other.
In general, you should require encryption if all clients can support it. If not, then specifying
the SUPPORTED option will allow encryption to be used when possible. Avoid disabling
encryption for the critical services if at all possible, because it means data is being communicated in clear text.
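A sketch of a database mirroring endpoint that uses Windows authentication without fallback and requires encryption; the endpoint name and port are assumptions:
CREATE ENDPOINT MirroringEP
STATE = STARTED
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (
    AUTHENTICATION = WINDOWS NEGOTIATE,
    ENCRYPTION = REQUIRED ALGORITHM AES,
    ROLE = PARTNER
);
GO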

Defining an Endpoint Policy


The security policy for endpoints is similar to that for most any other SQL Server component: Grant only the minimum rights needed for this object. In addition, because an
endpoint is an active listener for communications, it can be started, stopped, or disabled.
It's recommended that only the endpoints needed for T-SQL communications be started
and active on any server. If you no longer need an endpoint, you should disable it if you
may need it again or drop it if not.
This applies to protocols because there is sometimes a temptation to enable all protocols on a
server. If there are no named pipes or VIA clients, then these protocols and their corresponding
endpoints should be removed. Doing so reduces the surface area available for attack and increases
the security of your server.
The state of each endpoint is also a security item that should be addressed in your policy. The
three states differ in how they affect the security of your server. As mentioned previously, if an endpoint isn't being used, it shouldn't be started; but should it be stopped or disabled? The server responds differently to these two states, and this is important in deciding on security policy. In the disabled state, the endpoint doesn't respond to client requests, which is the same as if the endpoint hadn't been created. If an endpoint isn't being used currently, it should be in this state.
In contrast to the disabled state, the stopped state doesn't respond to client requests but returns an error. This is similar to the 404 errors returned by web servers when a page doesn't exist. This state doesn't allow clients to connect to the server, but it does tell a client that an endpoint exists on this port that may be started again. The stopped state is useful for temporary service interruptions, like maintenance activities being performed on the endpoint; however, you shouldn't use it as anything more than this. If the endpoint won't be used for any length of time, you should disable it.
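Changing the state is a one-line statement; the endpoint name here is an assumption:
ALTER ENDPOINT MirroringEP STATE = STOPPED;   -- temporary interruption
ALTER ENDPOINT MirroringEP STATE = DISABLED;  -- not expected to be used again soon
GO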
Finally, it's recommended that you use secure communications (HTTPS for SOAP endpoints, and encryption for Service Broker and database mirroring) to make sure your transmissions aren't intercepted while in transit. The defaults for the various types of endpoints are set with the Trustworthy Computing principles in mind, requiring Windows authentication and encryption by default.

Granting SQL Server Agent Job Roles


THE BOTTOM LINE

The msdb database has three fixed database roles that allow you to assign permissions relating to SQL Server Agent to users who aren't sysadmins.
SQL Server Agent is an extremely useful component in the SQL Server platform, letting you
schedule tasks that need to be performed both in SQL Server and on the domain. In the past
with SQL Server 2000, a broad scope of permissions was required to use the Agent, but starting with SQL Server 2005, a number of roles have been introduced allowing finer-grained
control over the security for this subsystem. You can also specify proxies for jobs instead of a
central proxy for all non-sysadmins as in SQL Server 2000.
As with other security decisions, your policy should grant the least privileges required to
logins in order to allow them to perform their jobs. The three roles are:
SQLAgentUserRole. This is the least privileged role for using the SQL Server Agent. It allows the user to work with single-server jobs for that instance only and to enumerate operators but not change them. Only jobs owned by the user can be examined and changed. The job history can't be deleted.
SQLAgentReaderRole. This role includes all the privileges of the SQLAgentUserRole, but it can also work with multiserver jobs and view their history, properties, and schedules. This role can't edit those multiserver jobs or delete job history.
SQLAgentOperatorRole. This role includes all the privileges of the other two roles as well as the ability to list alerts and operators and to delete the job history from the local server. This role can also enable or disable jobs.
None of these three roles is as powerful as an administrator, and none can create or edit alerts, operators, or proxies. However, if you're delegating the ability to start jobs, you should consider using one of these roles to give limited privileges to certain users.

Case Study: Specifying Proxies


SQL Server Agent lets you specify a proxy for individual job steps. In this case, the job
will run under the credentials of the proxy instead of the owner of the job or the SQL
Server Agent service.
Only sysadmins can add, edit, or delete proxy accounts, so there is no security to delegate
here. However, the proxy is restricted to a particular subsystem of SQL Server, so the


sysadmin should decide to which subsystem(s) the proxy has access. These are the
subsystems:

ActiveX Scripts
Operating System Commands
Replication Distributor
Replication Merge
Replication Queue
Replication Snapshot
Replication Transaction-Log Reader
Analysis Services Command
Analysis Services Query
SSIS Package Execution
Unassigned

As with roles, you should create proxies for different types of jobs or actions that they need to perform. However, they shouldn't have more permissions than necessary to complete a particular job step. If a series of steps requires access to Integration Services packages only, and another series of steps accesses the operating system only, you shouldn't create a proxy with rights to both systems and use it in both cases. Instead, create two separate proxies.
Your policy for creating proxies should also seek to minimize the rights required for each job and to share proxies in job steps only insofar as they're doing the same work as another job step.

Designing .NET Assembly Security

THE BOTTOM LINE

Starting with SQL Server 2005, there has been a huge increase in the capabilities of stored
procedures and functions. This involves the integration of the .NET Common Language
Runtime (CLR) with the database server. This lets you use any .NET language, from C#
to VB.NET to Perl.NET, to write a series of methods that can be wrapped in a function or
stored procedure and called in any T-SQL batch.

The assemblies that can be called from within SQL Server must first be registered in the SQL Server using the CREATE ASSEMBLY command, similar to the way you must register DLL code with Windows. This registration command allows the user to specify how the security for these assemblies is controlled by the server. There are three levels of security for assemblies: SAFE, EXTERNAL_ACCESS, and UNSAFE.

Setting SAFE
SAFE assemblies are completely written in managed .NET code and are intended to
access resources only within SQL Server. Computations and business logic can be performed with data in tables, but there is no access outside the SQL Server, including the
Windows host file system and API calls.

This is the most restrictive level of security for a .NET assembly. If requirements dictate that a .NET assembly perform computations on SQL Server data only, this is the level of security you should set. There are some restrictions on the assembly, such as the fact that it must be type-safe, and a few other limitations on the programming capabilities allowed.


If the CREATE ASSEMBLY permission is granted to a user, then that user can create assemblies with this level of security.

Setting EXTERNAL_ACCESS
Assemblies that must access resources outside of SQL Server, such as the Windows host file system, the network, the local registry, or web services, are usually secured with the EXTERNAL_ACCESS security level. These assemblies are still written as managed code and must be type-safe, but they can make limited accesses outside of SQL Server. These assemblies can access memory buffers owned by the assembly in the server.
If an assembly requires this level of access, then this is the preferred level of security because
there are still restrictions on the programming of the code. The login creating this assembly,
however, requires the EXTERNAL_ACCESS ASSEMBLY permission in addition to the
CREATE ASSEMBLY permission.
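A sketch of registering an assembly with a chosen permission set; the assembly name and file path are assumptions:
CREATE ASSEMBLY HorseCareCalc
FROM 'D:\Assemblies\HorseCareCalc.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;
GO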

Setting UNSAFE
UNSAFE assemblies are completely unrestricted by SQL Server and can access any resource
on the local machine or the network. These assemblies can access memory buffers in the
SQL Server process space and call unmanaged code, such as legacy COM components.
There are virtually no restrictions on the type of code that can be called from an UNSAFE
assembly, which can severely affect the stability of SQL Server. Because of the potential issues,
only a member of the sysadmin server role can create an UNSAFE assembly.
It's recommended that you require extensive code reviews by very experienced developers
before allowing UNSAFE assemblies on your server.

SKILL SUMMARY
Developing security policies at the SQL Server service level is much more complicated in SQL
Server 2005 and 2008 than in previous versions. With a change in the paradigm of how the
server security is structured, the administrator must better understand the new capabilities
and the ramifications of using them.
The login and role structure is similar to previous versions, but changes allow more granular
control of permissions; administrators should study these areas. This is
especially true for the SQL Server Agent permissions and roles.
Some of the new structures, such as DDL triggers, endpoints, and .NET assemblies, mean that
the designer of a security policy must address new areas. Doing so requires extensive work to
understand how these new features work from a security standpoint.
This lesson has broken down SQL Server security to those areas that affect the overall server.
This completes two thirds of the security structure for SQL Server. The next Lesson will discuss
the most granular security, at the object level.
For the certification examination:

Understand the different types of SQL Server logins. You should know the types of logins available in SQL Server, Windows-authenticated logins and SQL Server-authenticated logins, as well as their differences.

Understand the server roles available in SQL Server. Know the different server roles,
including their capabilities and their restrictions.


Understand database users. You should understand what a database user is and how it's mapped to other principals or securables.

Understand schema concepts. Know what a schema is and its role in security management.

Understand database roles. Know the three types of roles in SQL Server, understand the
differences, and know when to use them in your security design.

Know how to work with endpoints. Understand what they are and how they impact security.

Understand what DDL triggers are. These triggers are different from regular triggers and
you need to understand what they are and how they work.

Know how to secure SQL Server Agent jobs. The SQL Server Agent subsystem runs alongside the other SQL Server services, and you must understand how this job system impacts
the overall security of the database platform.

Understand how to secure .NET assemblies. You should understand how security works in
regard to .NET assemblies and the implications of the various settings.

Knowledge Assessment
Case Study
Herd of Two
Herd of Two stables is a large facility that boards and trains horses. It exists on 400 acres
in Colorado and employs 12 administrative and technical people to handle its computing infrastructure. In addition, 32 other stable hands have requirements to interact with
terminals to use the time-card system and to track horse care.

Planned Changes
Herd of Two stables is currently running one SQL Server 2000 instance on Windows
2000 for its three applications. It would like to add two additional instances on separate
servers to improve performance and upgrade to SQL Server 2005 at the same time.
All the servers need to be secured properly. The stable hands use SQL Server logins
because they don't have accounts set up in the AD; however, they need to conform to
the password policy present on the domain.
There is also the need to use encryption to protect the personal information of clients
along with their credit-card billing information.

Existing Data Environment


Currently, three applications are used on SQL Server 2000. One is the financial system
to handle all billing and accounting functions, which is accessed by only a few employees.
The second application is the horse-care system, which contains the instructions for
feeding, medicating, and training. This application is accessed by almost all employees.
There is also an application for tracking time worked by employees, which is connected
to hardware devices that print time cards. These devices are programmable and run a
small client application that logs in to SQL Server to record employee IDs and time
notations.

Existing Infrastructure
The current servers are running Windows 2000. There are enough licenses to cover new
servers, so it's assumed Windows 2000 will be installed on the new servers.


The hardware for the new servers as well as the existing server meets the requirements
for SQL Server 2005.

Business Requirements
All employees using SQL Server logins must abide by the password policy, which is set
on each machine using AD Group Policy.
The barn foreman has requested that he be notified whenever a new stable hand's login
is added to the SQL Server to be sure they have been trained properly before receiving
access.
It's requested that a high-availability system be set up between two of the servers for the
horse-care system, but no money is available for a clustering solution.
The barn foreman has requested the ability to perform backups of the horse-care database during the afternoon when all horses have received their medication.

Technical Requirements
The applications can be altered to handle the encryption needs; however, the hardware
can't handle large key lengths.
The SQL Servers must run various jobs on demand. The IT staff isn't always available,
so a secure solution is desired that will allow the barn foreman to execute a few jobs on
the horse-care system server.
Some enhancements to the three systems are planned using the CLR integration.
Various assemblies will need to be installed on the servers in a secure manner.

Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
Use the information in the previous case study to answer the following questions.
1. The new servers will be installed with Windows 2000 and SQL Server 2005. What type
of password requirements will be enforced?
a. Password expiration
b. Password policy (length, content)
c. Both of the above
d. None of the above
2. Which of the following algorithms would be best suited to provide encryption without
taxing the server hardware?
a. RSA 1024
b. AES_256
c. DES
d. Triple DES
3. To allow the barn foreman to run certain jobs on one SQL Server 2005 server, which
role should they be assigned?
a. sysadmin
b. SQLAgentOperatorRole
c. SQLAgentReaderRole
d. SQLAgentUserRole
4. One of the assemblies that will be used to enhance the horse-care system requires access
to read an RSS feed from an Internet Web site. What level of permissions should it be
installed with?
a. SAFE
b. UNSAFE


c. EXTERNAL_ACCESS
d. UNLIMITED
5. When the new servers are installed, database mirroring will not be enabled. Should the
database mirroring endpoint be created to prepare for the future activation?
a. Yes
b. No
6. What would be the preferred method of ensuring that new logins are sent to the barn
foreman?
a. Build an auditing routine into the application, and use it to add all logins.
b. Load the Windows security log into a table at the end of each day, and search for the
creation of a new login.
c. Use a paper form for new logins that requires the foreman's signature before a login is
created.
d. Create a DDL trigger that responds to the server CREATE LOGIN event. Use it to
send e-mail to the foreman.
7. To allow the barn foreman to back up the horse-care database, what role should he be
assigned?
a. sysadmin
b. backupadmin
c. db_backupoperator
d. db_owner
8. Enhancements to the horse-care application will require tables in the financial database.
However, the users of the financial database should not access the horse-care tables
directly. What security measure should be used to easily assign permissions?
a. Server roles
b. Schema separation
c. Fixed database roles
d. Application roles
9. Currently, all users of the time-card application can access all tables and stored procedures in that database. However, future enhancements are planned to limit access to
stored procedures only. What role should be assigned to the users of this application?
a. db_owner
b. A user-defined role with specific permissions
c. db_datareader
d. db_datawriter
10. The president of the company is concerned about being able to connect to the servers at any time. Application security is handled in the applications, and the president is
assigned to the user-defined roles in SQL Server 2005 for access. She has no server roles
assigned. She would like to use the Dedicated Admin Connection to be sure she can
always connect. Can you grant rights to this endpoint to her login?
a. Yes, with GRANT CONNECT ON DAC To <login>.
b. Yes, by assigning the login to the sysadmin role.
c. No, there is no way to do this.

LESSON 7

Designing SQL Server Object-Level Security

LESSON SKILL MATRIX

TECHNOLOGY SKILL                        EXAM OBJECTIVE
Develop object-level security.          Foundational
Design a permissions strategy.          Foundational
Analyze existing permissions.           Foundational
Design an execution context.            Foundational
Design column-level encryption.         Foundational

KEY TERMS
assembly: A managed application module that contains class metadata and managed code as an object in SQL Server. By referencing an assembly, common language runtime (CLR) functions, CLR stored procedures, CLR triggers, user-defined aggregates, and user-defined types can be created in SQL Server.
common language runtime (CLR): A key component of the .NET technology provided by Microsoft that handles the actual execution of program code written in any one of many .NET languages.
data control language (DCL): A set of SQL commands that manipulate the permissions that may or may not be set for one or more objects.
execution context: Execution context is represented by a login token and one or more user tokens (one user token for each database assigned). Authenticators and permissions control ultimate access.
permission: An access right to an object controlled by GRANT, REVOKE, and DENY data control language commands.

The purpose of SQL Server, or any relational database system, is to store data and make
it available for use by applications. All the security that is built into SQL Server is for the
purpose of protecting and ensuring only authorized access to your data. This Lesson looks
at the lowest level of security: the objects. If you have to name it, it's an object; if it has a
name, it's an object.
Object is a general term for all the entities inside a database that store or interact with
your data. These include tables, views, stored procedures, functions, assemblies, and more.
The users who have access inside a database require permissions in order to use these
objects to read and write data.


Developing a Permissions Strategy


THE BOTTOM LINE

Before you can assign permissions to the objects in your database(s), you must have some
type of strategy for securing the objects and the data they contain. This strategy should
define at a high level how you'll handle the requirements for your applications.

The first recommendation that every SQL Server administrator should implement is to
adhere to the basic tenet of security and not assign any more access than necessary to perform
a job. This ensures that no one, whether accidentally or maliciously, accesses data or functions
they arent supposed to access. You control access with permissions.
The use of roles for assigning permissions is another part of a good permissions strategy.
Whether these are fixed roles or user-defined roles, managing permissions is greatly eased by
using roles for all permission assignments and not assigning any permissions to individual
users. Security is a hard process, and by making it easier to administer, it's more likely that
your security decisions will be enforced and maintained.

REF
In Lesson 6, you examined the administrative roles for the server, databases, and SQL Server Agent.

This use of roles includes administrative functions. SQL Server allows you to divide the
administrative tasks among a larger number of people without compromising the server's
overall security by granting everyone sysadmin permissions. It's recommended that if your
company uses multiple people to handle various tasks, you should ensure they aren't granted
more permissions than necessary.
The next part of your strategy should address whether any permissions are assigned to the guest
user or the public account. It's highly recommended that you don't assign rights to either of
these objects and that you instead explicitly set up other roles to meet your needs. Because these
are shared objects among all users, it's harder to remove permissions from them if necessary.
Some administrators have avoided using application roles because doing so requires that an
application be able to execute a stored procedure and switch permissions. However, this is a
great way to ensure that a specific application is used to access data. If you can control the
way the application executes stored procedures on login, this is a good choice. If not, then
you may want to design a policy that forbids the use of this type of role.
Last, you need to determine the degree to which you'll allow the object-control permissions,
such as Alter, Create, and Drop, for different objects or schemas. Most users don't need
these permissions, nor are they warranted, because most users won't change the structure of
database objects. However, you may have specific groups of developers or application administrators
who need the ability to use these permissions. Your strategy should specify the cases
in which these permissions will be granted or which objects or schemas you'll let specific users
change. However, as with individual object permissions, you should use roles to easily assign
permissions to and remove them from users by moving the users in and out of roles.

TAKE NOTE

In assigning data-access permissions and control permissions, you should use separate roles
for each.

Your policy should strike a balance between being as granular as necessary to meet the
business needs of your company and making the role assignment simple enough to ensure it
can be administered. The extreme level would create a role for each user, which would be an
unnecessarily complex administrative burden. Instead, you should assign a role to each major
job function and grant the appropriate permissions to the role. If there are exceptions for an
individual or a subset of the job's members, create a role for the exceptions and then use a
DENY approach to remove the permissions they shouldn't be assigned. This way, a single role
retains the permissions to a large class of objects.
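A minimal sketch of this role-and-exception pattern (all role, user, and table names here are hypothetical):

-- One role per major job function, plus a role that carries the exceptions
CREATE ROLE InventoryClerks;
CREATE ROLE InventoryClerksNoUpdate;
GO
EXEC sp_addrolemember 'InventoryClerks', 'Pat';
EXEC sp_addrolemember 'InventoryClerksNoUpdate', 'Pat';
GO
-- Grant the job-function permissions to the main role
GRANT SELECT, UPDATE ON dbo.Inventory TO InventoryClerks;
-- DENY removes only what the exception group should not have
DENY UPDATE ON dbo.Inventory TO InventoryClerksNoUpdate;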

LAB EXERCISE
Perform Exercise 7.1 in your lab manual.

In Exercise 7.1, you'll assign permissions to a user-defined database role.

Understanding Permissions
A number of permissions can be assigned to different objects, each with different meanings that affect how your security plan applies to the database users. In this section,
you'll briefly look at the various permissions and the objects to which they apply.
Before moving to permissions, you need a basic understanding of SQL Server terminology
related to security. The permissions assignment involves two entities. A principal is a user,
group, or process that requests some service. These are the individuals who receive permission
to perform some action or request a service. A securable is an object on which some
permission is granted. For example, in Exercise 7.1, the SalesManager group is a principal
that received the Select permission on the HumanResources.JobCandidate securable.
To apply a permission to an object, or to remove it, the administrator uses the Data Control
Language (DCL) commands. There are many DCL commands that are used to manage
access to SQL Server. The three DCL commands related to permissions are the following:

CERTIFICATION READY?
You elect to apply the
REVOKE permissions
to all objects in your
database to user Nancy.
What are her effective
permissions for these
objects?

CERTIFICATION READY?
Know the difference
between GRANT,
REVOKE, and DENY.

GRANT. The GRANT command adds the permissions listed to the permission list
of the affected user or role. This command is used when you wish to let a principal
receive new permissions. In Exercise 7.1, you added the Select permission on the
HumanResources.JobCandidate table to the SalesManager role.
You can use the WITH GRANT option with this command. Doing so allows the
principal that receives the permission to in turn assign it to others.
REVOKE. REVOKE is the opposite of GRANT. It removes a permission on an object
from a principal. You use this command when a particular principal no longer needs the
specified permission on the securable. The lack of a permission leaves the object in an indeterminate
state: access has been neither granted nor denied. The interaction of a user and a
role must resolve this ambiguity. If the ambiguity remains, SQL Server denies access.
There are two options with this command. The first parameter is GRANT OPTION,
which removes the specified principal's ability to grant permissions. This doesn't affect
the permission set for the principal on the securable, but it prevents the principal from
assigning permissions to other principals.
The second parameter is CASCADE, which revokes the permission from the user as well
as any users to whom they have subsequently granted permissions.
DENY. The last permission command is DENY, which prevents the principal from
having the permission specified on the securable. Unlike REVOKE, this command doesn't
require permission on the object to have been previously granted. Instead, this command
prevents access whether the principal currently has the permission or is assigned it in the
future. The DENY permission overrides any other permission assignments.
This command is often used when overall permissions are granted to a role, but specific
securables contained in that role must not be accessed by a subset of the role members.
For example, suppose that all the sales managers should be allowed to SELECT from the
Production.ProductInventory table, but the junior managers (Bob and Steve) shouldn't
be allowed to UPDATE this table. Rather than creating a separate role for one group of
sales managers, you can use the DENY command to remove the ability to update data
from those two users only.
Using GRANT and DENY lets you easily implement the permission policy that fits your
business needs. As discussed earlier, using roles at the highest level ensures that your policy is
easy to administer. Using GRANT to assign permissions at this level and DENY to selectively
exclude individuals is the most efficient way to manage permissions. As your schema and
business needs change, you can use REVOKE to remove permissions that no longer apply.
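As a minimal sketch of the example above (the SalesManagers role name is assumed; Bob and Steve are the database users from the scenario):

-- All sales managers read and update inventory through their role
GRANT SELECT, UPDATE ON Production.ProductInventory TO SalesManagers;
-- The two junior managers lose only the ability to update
DENY UPDATE ON Production.ProductInventory TO Bob;
DENY UPDATE ON Production.ProductInventory TO Steve;
-- If the restriction is later dropped, REVOKE removes the explicit DENY entries
REVOKE UPDATE ON Production.ProductInventory FROM Bob;
REVOKE UPDATE ON Production.ProductInventory FROM Steve;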


Applying Specific Permissions


The particular permissions you apply will depend on the business needs of your
applications and database. The permissions that are applicable to objects are listed next,
along with a brief description to assist you in deciding which permissions should apply
to different types of objects.

You need to understand that the permission sets are hierarchical in nature. Permissions on a
container object, such as a database or schema, imply permissions on the objects contained
inside, such as the schemas inside the database or the objects inside a schema:
Alter. The Alter permission applied to individual objects includes all permissions except
the ability to change ownership. These permissions include the ability to create and drop
objects. If granted on a scope, such as schema, the principal can create or drop objects
within the scope.
Alter Any. This form of the Alter permission applies to either server-level or
database-level securables, such as logins or users, respectively. The principal receiving this
permission can create, alter, or drop any securable in the scope.
Backup. The Backup permission supersedes the Dump permission and allows the
principal to perform a backup on the database.
Control. The Control permission is equivalent to assigning ownership of the securables.
All available permissions are granted to the principal, and the principal in turn can grant
those permissions to others.
Create. The Create permission allows the principal to create new objects of the type
specified in the assigned scope, server, or database.
Delete. The Delete permission lets the principal remove data from tables, views, or
synonyms.
Execute. The Execute permission allows the principal to invoke stored procedures,
functions, or synonyms.
Impersonate. The Impersonate permission can be granted at the login or user level and
allows the principal to change their security context to that of the assigned user or login.
Insert. The Insert permission applies to tables, views, and synonyms and allows the
addition of data to those objects.
Receive. The Receive permission allows the principal to receive messages from a Service
Broker queue.
References. The References permission is required to access another object for the purpose of verifying a primary- or foreign-key relationship. This applies to scalar and aggregate functions, the Service Broker, tables, views, synonyms, and table-valued functions.
Restore. The Restore permission supersedes the Load permission and allows a backup to
be applied to a database in a restore operation.
Select. The Select permission lets a principal query a particular object and return the
data from a table, view, table-valued function, CLR function, or synonym.
Take Ownership. This permission is similar to the Windows ACL permission and allows
the principal to change ownership to themselves for the objects on which it's granted.
Update. The Update permission confers the ability to change the individual values of
data in tables, views, and synonyms.
View Definition. This permission is new to SQL Server 2005 and provides more
granular control by allowing access to the metadata about a particular class of object.
Without this permission, the metadata definition of an object isn't available.
The granularity for most of these permissions is the individual object level with the exception
of the Select, Insert, Update, Delete, and References permissions. These permissions can be
assigned to individual columns if your business needs dictate this capability.
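As a brief, hypothetical sketch (the table, columns, and role are invented for illustration), column-level permissions use the same DCL commands:

-- The role may read only the non-sensitive columns
GRANT SELECT ON dbo.Staff (StaffID, Title) TO HRReaders;
-- The salary column is explicitly denied
DENY SELECT ON dbo.Staff (Salary) TO HRReaders;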


Analyzing Existing Permissions

THE BOTTOM LINE

Before you implement the policy you've set up and make changes, you should examine
the existing structure of permissions to ensure that any changes you require won't result in
application issues. You also need to determine if there are conflicts with your policy and the
existing permissions and mitigate these issues.

If you aren't setting up a new application or database, then you probably have existing
permissions in your database(s). You must examine two different types of object permissions:
the specific object-access permissions such as Select, Insert, Update, Delete, Execute, and
others that are used for accessing data; and the controlling permissions, such as Alter, Create,
Drop, and similar commands that convey control over an object.
The most common types of permissions are the data-access permissions granted on individual
objects. These permissions are shown in Table 7-1 along with the objects to which they can be
granted. The best practice is to assign these permissions to roles only, not to individual users.
Table 7-1
Object permissions

PERMISSION         OBJECTS TO WHICH IT APPLIES
Select             Tables, views, table-valued functions, CLR functions, and synonyms
Insert             Tables, views, and synonyms
Update             Tables, views, and synonyms
Delete             Tables, views, and synonyms
References         Scalar and aggregate functions, table-valued functions, Service Broker queues, tables, views, and synonyms
Execute            Procedures, scalar and aggregate functions, and synonyms
Receive            Service Broker queues
View Definition    Scalar and aggregate functions, Service Broker queues, tables, views, and synonyms

TAKE NOTE
When you're assigning object permissions, include the security choices with the CREATE or ALTER scripts used to develop the objects.

Unfortunately, SQL Server Management Studio doesn't include any easy tools to see all the
object permissions for a role. However, a great deal of information is available using functions
and views. You can determine the permissions for a role or a user by executing the
fn_my_permissions function under the context of the user or role. Changing the execution context of
the current user or login is discussed in the next section.
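For example, a sketch of checking a user's effective permissions on an object (the SalesClerk user name is hypothetical):

EXECUTE AS USER = 'SalesClerk';
SELECT permission_name
FROM fn_my_permissions('HumanResources.JobCandidate', 'OBJECT');
REVERT;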
CERTIFICATION READY?
Know the different object permissions and the object types to which they can apply. Can Execute be applied to a Service Broker queue? How about the View Definition permission?

The process of reconciling the existing permission set against the policy that has been determined is tedious. It's best tackled on a role-by-role basis, working from the new policy and
checking the permissions for the objects and securables against what is currently assigned. As
you identify missing permissions, you can use the GRANT command to add them to either
new or existing roles as specified in the policy.
If you encounter permissions that are currently granted on securables but shouldn't be in
the new security policy, you have a choice of how to proceed. If the permissions are at a
gross level, meaning an entire role currently has permissions it shouldn't, then you can use


REVOKE to remove these permissions. If this situation is at a lower level, such as a user or
users who should have permissions separate from the larger role, then you can create a role for
this subset that contains the DENY permission at the appropriate level.

Specifying the Execution Context


THE BOTTOM LINE

Execution context specifies the way in which permissions are checked on statements and
objects.

By default, when a login connects to a SQL Server database, that login is used to determine
which permissions the user is assigned and which objects that user can access. However, SQL
Server includes the ability to change your context to that of another user, enabling you to
receive additional or, potentially, reduced permissions for the batches you execute.
Previous versions of SQL Server provided a limited ability to change your execution context
with the SETUSER command. This was a handy tool for system administrators, but it wasn't
useful for the average login because it was limited to the sysadmin or db_owner roles. Because you
sometimes need to escalate permissions for a single function, SQL Server provides the EXECUTE AS
statement, which allows for permission escalation by temporarily changing the user context.

TAKE NOTE

Microsoft has deprecated SETUSER, so your policy should recommend that it be replaced
in code wherever possible.

This command can be used in two cases, and you should address both with decisions
governing its use. In the first case, a particular function or stored procedure executes in a
specific logins context. The second case is for longer batches or sessions where a series of
commands are executed as another user. These two situations are discussed next.

Implementing EXECUTE AS for an Object


The first case for switching your execution context occurs when an object contains the
EXECUTE AS statement as part of its definition. The object can be a stored procedure,
a function, a Service Broker queue, or a trigger. Each of these objects can have the execution context specified in one of four ways:

EXECUTE AS CALLER. The behavior of an object is just as it is in previous versions
of SQL Server when this is specified. The execution context of the module being called
is set to that of the caller or login invoking the module.
In this situation, the permissions are checked on the module and its referenced objects
using the security token of the login or user executing the module. No additional
permissions are added to or removed from the session.
EXECUTE AS <user_name>. In this case, <user_name> should be replaced with the
name of a database user. When the module is invoked, the permissions for the caller are
checked only to ascertain whether the caller can execute the stored procedure. After that,
permissions for any objects inside the module, whether in the same ownership chain as
the module or not, are checked against the username specified, not the caller. This allows
you to specify permissions for the module that may be different than those of the caller
or any other user in the database.


EXECUTE AS SELF. This context is similar to the EXECUTE AS <user_name> context,
but it uses the context of the user creating or altering the module. SELF, in this case,
applies to the user that is executing the CREATE or ALTER statement on the module.
As an example, Steve is creating the NewSchema.MyProcedure stored procedure. The
code is as follows:
CREATE PROCEDURE NewSchema.MyProcedure
WITH EXECUTE AS SELF
AS
SELECT * FROM Steve.MyTable

Steve then grants Dean permission to execute this stored procedure. When Dean executes it, permissions are checked to be sure he can execute the module, but the permissions check on Steve.MyTable uses Steve's permission set.

CERTIFICATION READY?
Know the forms of the EXECUTE AS command and be prepared to identify how the use of this command would alter the execution context.

EXECUTE AS OWNER. This context uses the permission set of the module owner for
all objects referenced in the module. If the module doesn't have an owner, the owner of
the module's schema is used instead.
This is similar to EXECUTE AS SELF if the person creating the module is the same as
the owner at execution time. However, because object ownership can be changed, this
context allows the permission check to move to the new owner of the object.

TAKE NOTE
In the three cases where execution context is set to a particular username, that user can't be
dropped until the execution context is changed.

Case Study: Developing an EXECUTE AS Policy for an Object


These are all powerful features that allow you to temporarily assign a different set of
permissions to a user by allowing them to execute a module. These permissions don't
carry through; for example, permission to execute a module that queries the Sales table
doesn't grant permission to access the Sales table directly.
This limitation is useful when you want to let users access cross-schema objects, but you
don't want to grant them explicit rights. Just as with schemas, implications exist that
can cause issues in administering security.
Because users tend to change more often than permissions or objects, you use techniques that allow for this flexibility. In assigning permissions, you use groups and roles
to collect users together for easy administration. Starting with the 2005 version of SQL
Server, the concept of a schema has been available. The schema separates object ownership from individual users for the same reason. This should caution you against
using a particular user or SELF to change execution context: Because a one-to-one
mapping exists between the user and a module, if the user needs to be dropped, every
module must be altered to change the execution context. This is the same administrative
issue with users both owning an object and being its schema.
Instead, if you need to grant temporary permissions, the EXECUTE AS OWNER
statement is the best choice if the permissions for the owner are set appropriately for the
referenced objects. However, this can still cause issues if the administrator doesn't want
an object's owner to have the extended permissions.
The best policy you can implement is to create specific users that are in a role expressly
created to meet your permissions needs. These users shouldn't map to a user login, but
rather should exist only to execute the modules requiring special permissions.
If you think your environment is static enough to use individual users, then EXECUTE
AS is a good way to change permissions in only one module.
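A minimal sketch of that approach (all names are hypothetical): a user without a login exists only to supply the execution context for the module.

-- A user with no login; it cannot connect, it only serves as an execution context
CREATE USER ReportingContext WITHOUT LOGIN;
GRANT SELECT ON Sales.OrderDetail TO ReportingContext;
GO
CREATE PROCEDURE Sales.usp_GetOrderSummary
WITH EXECUTE AS 'ReportingContext'
AS
    SELECT COUNT(*) AS OrderLines FROM Sales.OrderDetail;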


Implementing EXECUTE AS in Batches


You can also use the EXECUTE AS statement as a stand-alone command that changes
the entire context of the session to that of the user specified. In this case, the use of the
statement is as follows:

USE MyDB
GO
EXECUTE AS USER = 'Kendall'
SELECT * FROM dbo.MyTable
. . . (more statements)

If Steve logged on to the SQL Server, the first statement sets the database context. The
next line changes the security context to that of Kendall. From that point forward, all the
statements that are executed have their permissions checked as if Kendall had logged on and
were executing them.
To use this statement, the calling login or user must have Impersonate permissions on the login
or user named in the command. This entails assigning that permission to those users or logins
that will need to change their context. As with other permissions, a database role is the best vehicle for assigning permissions for user-level context switches. For login-level changes, you must
use individual permissions, although you can use Windows groups if you have Windows logins.
All statements are executed with this new security context until one of the following events
occurs:
The session ends.
Another EXECUTE AS statement is run.
The REVERT command is issued.
The behavior of this command is similar to that of a trigger in that the calls to EXECUTE
AS nest themselves on a stack. The REVERT command returns you to the previous execution
context by default. You'll look at the REVERT command in more detail after examining the
options for EXECUTE AS.
When you change execution context with EXECUTE AS, a few options are available that can
give you more control over the security of your data:
Scope of the EXECUTE AS Statements. You can change context in one of two ways: at the
user level or at the login level. By choosing one of these, you define the scope of the impersonation that takes place. A login is a security object that exists at the server level, covering all
databases, as fixed server roles do. If you change your context to that of a login, then it's as if you
logged on to the server as that user; the settings follow even if you change databases.
The syntax for this option is as follows:
EXECUTE AS LOGIN = <login name>

The user scope is only within the database in which it's invoked and the user exists. This
prevents the inadvertent granting of permissions in other databases that the impersonated user
may have rights to access. This extends to USE <database> statements and linked server or
other distributed queries as well. Any of these cross-database statements will fail.
The syntax for this option is as follows:
EXECUTE AS USER = <user name>

NO REVERT. The NO REVERT option prevents the return of execution context to the
previous user or login. This is similar to an application role in that the session remains in the
context of the new user until the session is dropped. In essence, this command clears the stack
of execution contexts and prevents the return to any prior context.
If the REVERT command is run after this option has been specified, it has no effect. To
invoke this option, use the following syntax:
EXECUTE AS USER = <user name> WITH NO REVERT


NO REVERT COOKIE. As with many things in SQL Server, there is an exception to the
NO REVERT option. The execution context can be stored in a varbinary variable and used
to return to the previous execution context. This option allows the client to maintain the data
needed to restore the previous context.
The syntax to invoke this option is as follows:
DECLARE @cookie VARBINARY(8000);   -- large enough to hold the context cookie
EXECUTE AS USER = <user name>
WITH NO REVERT
COOKIE INTO @cookie;
GO

To restore the context, you execute the following, assuming the @cookie variable contains the
correct cookie from the previous statement:
REVERT WITH COOKIE = @cookie

This type of statement ensures that the execution context can be reversed only by a client that
knows the correct cookie value. When connection pooling is used, this can prevent another
client from changing its context without knowing this value.

Auditing
It's important to consider the auditing aspects of changing context. Once users change
their context, many of the functions associated with auditing return the name of the
new context, not the original login. The debate over which login or username should be
recorded may be philosophical, but the requirements of many enterprises dictate that the
auditing should be traceable back to an actual physical user, not an account.

Fortunately, there are a few ways in which you can access the underlying information about the
original authenticated user. The ORIGINAL_LOGIN() function returns the name of the original
login (either Windows or SQL Server authenticated). This can be used in place of the
USER_NAME() or SUSER_SNAME() functions, which return the current context, not the original user.
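A quick sketch of the difference, reusing the hypothetical Kendall user from the earlier example:

EXECUTE AS USER = 'Kendall';
SELECT USER_NAME()      AS CurrentContext,   -- reports Kendall
       ORIGINAL_LOGIN() AS OriginalLogin;    -- still reports the login that connected
REVERT;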
If you're using Profiler to trace the execution of an application, Profiler includes a column
named SessionLoginName, which isn't visible or selected by default. This column contains
the value of the original login to the server; to see it, check the Show all columns check box
on the Events Selection tab, as shown in Figure 7-1.
Figure 7-1: Events Selection tab of the Trace Properties dialog box


The default Profiler selections of NTLoginName and LoginName will change depending on
the context switch. This can lead to problems in providing a well-defined auditing trail.

LAB EXERCISE
Perform Exercise 7.2 in your lab manual.

In Exercise 7.2, you'll change the execution context.

Developing an EXECUTE AS Policy for Batches


Using this command is much different from using it as part of a module. A user who
can change context can execute any command that the new context has the appropriate
permissions to run. Similar to the use of ad hoc SQL or dynamic SQL with EXEC(),
this has the potential to be a large security risk. An open-ended list of possible commands that can be run invites the possibility of a vulnerability or misconfigured permission set being left available for malicious or inadvertent use.

In general, the use of EXECUTE AS for batch queries is best suited to testing and simulation
by administrators and developers. By changing context to that of a regular user, a developer
can easily and quickly determine whether the application will perform properly for a real
user. The permissions are checked, procedures executed, and data queries return just as if the
developer had logged on as that user.
Because the developer can quickly change back to their own context to make a change with
REVERT, this greatly speeds the development process without requiring tedious switching of
windows or applications, logging off and back on, or any of the previous techniques. Because
many companies use Windows Authentication, this also allows the developer to simulate
other logins on a single machine as different users.

TAKE NOTE
The EXECUTE AS statement works only with logins. It doesn't work with groups, roles,
certificates, or any built-in accounts such as Local System, Local Service, or Network
Service.

REF
You examined application roles in Lesson 6.

Another way to use this feature in your security policy is to enable users to change context
with the database scope and create your own type of application role. By using the WITH
NO REVERT option, you can duplicate the functionality of the application role. You can
allow individual users to log on to SQL Server for auditing purposes; but by preventing them
from accessing any objects, you can force them to use the EXECUTE AS statement to obtain
permissions to query data.
Unlike with an application role, you can allow users to switch back to their original context.
This can be useful if you have more than one application and wish to let users switch between
them with different contexts without dropping their sessions.
The final place you should use the EXECUTE AS statement is in checking your permission
policy. A system administrator can use this command to impersonate any other users and
check which objects they can access and which they can't. This is the best way to ensure that
your permissions policy is correctly designed and implemented.

Specifying Column-Level Encryption


THE BOTTOM LINE

The use of encryption creates a large load on your database server's processor because complex
calculations are required to both encrypt and decrypt data. You have two ways to limit the
processing load on the server: Choose keys wisely, and limit the deployment of encryption.


REF
Lesson 5 discussed the overall encryption policy.

Choosing Keys
The key choice has two parts: the type of key and the use of the key.

As mentioned in Lesson 5, a variety of algorithms can be chosen for keys. These are divided
into two types, symmetric and asymmetric, as well as different lengths, commonly specified as
the number of bits in the key. Longer keys are more secure, but they require more processing
power. The general policy regarding key length is to choose the longest length you can, given
the processing capability your server can handle.
The algorithms have advantages and disadvantages that are beyond the scope of this text. If
you don't have a regulatory requirement for a particular algorithm, you should research them
to determine which is best suited for your application.
However, the use of each algorithm can greatly affect the performance of your server. The
recommendation is that you use an asymmetric key to secure the symmetric keys, which
in turn encrypt the data. This means a symmetric key is used to perform the encryption
and decryption of your data, because it's faster with less of a load on the server's processor.
This key is in turn encrypted by an asymmetric key, which is more secure but requires
greater processing power to perform the encryption operations. The use of certificates
or server-created asymmetric keys is a decision you should make based on your existing
infrastructure. Certificates are more complex and require more administration because they
have expiration dates. Some applications, however, require certificates.
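A minimal sketch of this hierarchy (the certificate and key names are hypothetical, and a database master key is assumed to already exist):

-- The asymmetric side: a certificate that protects the symmetric key
CREATE CERTIFICATE SalesCert
    WITH SUBJECT = 'Protects the Sales symmetric key';
GO
-- The symmetric key that actually encrypts the data
CREATE SYMMETRIC KEY SalesKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE SalesCert;
GO
OPEN SYMMETRIC KEY SalesKey DECRYPTION BY CERTIFICATE SalesCert;
SELECT EncryptByKey(Key_GUID('SalesKey'), N'4111-1111-1111-1111') AS EncryptedValue;
CLOSE SYMMETRIC KEY SalesKey;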
One other method of encryption is available, but the keys aren't maintained by SQL Server.
If you use the ENCRYPTBYPASSPHRASE() function, then you supply a password that will
be used to encrypt or decrypt the data. In this case, the user or application must supply this
passphrase or password every time an encryption or decryption operation takes place. Because
the keys aren't secured or recoverable, this method of encryption isn't recommended.

Deploying Encryption
The second aspect of encryption that you can control is the overall scale of how widely
it will be deployed in your database. The more columns you choose to encrypt, the more
processing power will be required both to store the data in a cipher format and to decrypt
the data each time that column is used in a query.

In addition to the processing requirements, encrypting data involves a few other performance
issues. Columns that are encrypted can't be efficiently indexed or searched using an index.
This means that primary keys, foreign keys, and columns that will be indexed for heavily used
queries shouldn't be encrypted. This requires careful consideration because many times the
column that you want to encrypt contains things like SSNs, credit card numbers, and so on
that you want to use as keys.
Both of these reasons should limit the amount of data you encrypt in your database. As you
seek to design an effective encryption scheme for your application, the meaning that can be
gleaned from unencrypted information should be carefully analyzed. Sometimes, encrypting
just a few sensitive columns can provide enough security without adversely affecting
performance.
For example, if you have a table of information that stores employees' names, titles, and
annual salaries, it doesn't make sense to encrypt the entire table. If just the salary is encrypted,
then someone reading the table can't determine an individual's salary, and the table can still be
easily indexed and searched by name or title.
However, if you choose to encrypt the name, then assuming this isn't a key field, all the salary
values can be mapped to titles. This approach will disclose to anyone who can read the table a

great deal of information about other employees based on their title. Similarly, if you encrypt
the titles, each person's name can be easily matched with a salary.

CERTIFICATION READY?
Is a certificate an example of an asymmetric key or a symmetric key?

LAB EXERCISE
Perform Exercise 7.3 in your lab manual.
Determining which fields to encrypt is a difficult decision that each administrator must make
based on the actual tables and the data contained in each column. Only by analyzing your
situation will you be able to decide which fields need to be encrypted and balance that against
the performance penalties of using encryption.
In Exercise 7.3, you'll encrypt a column of data two different ways.

Using CLR Security


THE BOTTOM LINE

Common language runtime (CLR) is Microsoft's runtime environment technology for
executing code written in .NET languages. CLR code can be used inside SQL Server, and you
can create a wide variety of functions, stored procedures, and triggers that can meet virtually
any need in a T-SQL query.

One of the most interesting features for developers in SQL Server is the ability to code
modules in any .NET language and execute them in SQL Server. However, managing the
security of these objects is more important than ever, because the impact of these objects can
be seen in queries that may access millions of rows of data at a time.

LAB EXERCISE
Perform Exercise 7.4 in your lab manual.

By default, the ability to execute CLR objects in SQL Server is turned off. An administrator
must make a conscious decision to enable this functionality so that .NET assemblies
registered on the server can be executed.
Exercise 7.4 will walk you through enabling the CLR environment in SQL Server.
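The switch is the clr enabled server configuration option, set with sp_configure; a minimal sketch:

EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;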

Creating Assemblies
Once CLR usage is enabled, then you can deploy or install .NET assemblies on the SQL
Server instance and create objects to use the methods inside these assemblies.

REF
In Lesson 6, you examined the security levels for .NET assemblies, which can be set at SAFE, EXTERNAL_ACCESS, or UNSAFE.

These assemblies can be deployed automatically using Visual Studio .NET 2005/2008
or copied and manually added to SQL Server by an administrator. The ability to add an
assembly to SQL Server, however, requires the Create Assembly permission in either case.
This is similar to the ability to create a stored procedure or function. As previously noted,
only sysadmins can create assemblies with an UNSAFE permission set due to the lack of
restrictions on what code is called.
Integrating assemblies into SQL Server is slightly more complex than doing so with
procedures or functions because of the nature of a .NET assembly. Each assembly can call
other assemblies; as a result, SQL Server must also load those referenced assemblies. If they
don't already exist in the database, then SQL Server will load them as well.
If the assemblies already exist, then the same user or role must own the assembly, and the
referenced assembly must have been created in the same database. If not, then the creation of
the assembly will fail.


One other note about security when creating assemblies: Because they're loaded from the file
system, the SQL Server must be able to access the files on the file system.
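A minimal sketch of registering an assembly (the assembly name and file path are hypothetical):

CREATE ASSEMBLY SalesHelpers
FROM 'C:\Assemblies\SalesHelpers.dll'
WITH PERMISSION_SET = SAFE;   -- or EXTERNAL_ACCESS / UNSAFE when justified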

Accessing External Resources


As previously mentioned, a .NET assembly's ability to access a resource outside of SQL
Server is governed by the permission set assigned when the assembly is created. If a permission set isn't given, it defaults to the SAFE level. Therefore, an assembly must explicitly
be assigned the EXTERNAL_ACCESS or UNSAFE permission set to access resources
outside SQL Server.

When this access occurs, it uses the permission set of the SQL Server service account unless
you use special programming techniques to enable impersonation of another account. These
advanced techniques require calls to the SqlContext.WindowsIdentity API. Consult the .NET
SDK for more information on this topic.
If you use impersonation, the calls must be out of process if they require data access. This
ensures the stability of the process and enables secure data access.
There are other restrictions on EXTERNAL_ACCESS. If an assembly attempts to access
external resources, and it was called by a SQL Server login, the access is blocked and an
exception is thrown. This also occurs if the caller isn't the original caller. If the caller is a
Windows login and the original caller of the module, then the security context of the SQL
Server service is used, as mentioned at the beginning of this section.

ENABLING TRUSTED ASSEMBLIES

As your applications integrate .NET assemblies, it's likely that some assemblies will need
to reference each other. To do this, the assemblies must be fully trusted if they are signed with
a strong name; that is, unless they've been marked with the AllowPartiallyTrustedCallers
attribute, which lets assemblies call each other without requiring them to be fully trusted.
USING APPLICATION DOMAINS
An application domain inside the SQL Server CLR environment is a bounded environment
inside of which a .NET module executes. This provides isolation between assemblies and their
internal structures.
However, if two or more assemblies are loaded that belong to the same owner, they're loaded
in the same application domain. This enables the assemblies to discover each other at runtime
through reflection and to call each other in a late-bound fashion. Permissions aren't checked
when assemblies call each other this way.

USING MODULE SIGNING


Modules in SQL Server, such as stored procedures, functions, triggers, and assemblies, can be
signed with a digital signature from a certificate. Users who are assigned permissions to use
this module can use the public key to decrypt the module and execute it.
This signing provides a way to temporarily grant greater privileges to a user or role without
switching the execution context. When a module that is signed is executed, the permissions
of the signer are added to the permissions of the caller only for the duration of the module's
execution. Thus a module with a specific function can perform an action without granting
additional rights to the user, ensuring that the code isn't changed.
An example is a module that accesses the list of SPIDs running on the server and computes
the blocking chain to determine the root blocker and the affected SPIDs. This is a complex
query in T-SQL, but a .NET assembly can easily compute the result. However, because it


requires access to a system resource under a different ownership chain than the module,
unnecessary rights must be granted for this approach to work. Using module signing, you
can create a module that accesses the system resources under a login assigned to a certificate.
The certificate login will have the permissions to access the system resources. When users or
roles receive permission to execute the module, they will receive the permissions to access the
system resources only as long as the module is being invoked. Once it completes, they will no
longer be able to access the system resources.
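A minimal sketch of module signing within a single database (the certificate, user, table, procedure names, and password are all hypothetical):

-- Certificate used only for signing
CREATE CERTIFICATE SigningCert
    ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!'
    WITH SUBJECT = 'Signs modules that need extra rights';
GO
-- A user mapped to the certificate carries the extra permission
CREATE USER SigningCertUser FROM CERTIFICATE SigningCert;
GRANT SELECT ON dbo.AuditDetail TO SigningCertUser;
GO
-- Callers of the signed procedure gain that permission only while it runs
ADD SIGNATURE TO dbo.usp_ReadAudit
    BY CERTIFICATE SigningCert
    WITH PASSWORD = 'Str0ngP@ssw0rd!';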

Developing a CLR Policy


The CLR is a complicated environment that allows access to resources inside and outside
SQL Server in many ways. It greatly increases SQL Server's capabilities and allows new
ways of working with data that were never possible before. It also allows unprecedented
access to the file system, host system resources, and network applications.

As such, it's important that you develop a strong policy to guide how CLR objects are
integrated into your SQL Server environment to guarantee a secure system. Your policy
should consist of three parts to ensure that you maintain control over the assemblies you
integrate into your database environment. (This assumes you'll allow the CLR to be enabled
on your server. If not, then that is the policy you should set.)
The first part of your policy uses GRANT, REVOKE, and DENY permissions you apply
to the assemblies to allow objects to be created using them. Because this policy affects the
functions and modules when they're executed, be sure you limit the rights to execute these
functions to those who need to do so. If one group uses a CLR module to load data from a
web service, don't grant rights to execute the module to all your users. Create a role, and limit
the execution rights to that group.
The second part of your policy deals with the assembly's permission set. You should assign
the minimum set of permissions necessary when the module is created. Unless an assembly
truly needs to access resources outside the server, it should have the SAFE permission set
applied. Stringent requirements should be met before you create UNSAFE-level assemblies.
Your policy for developing these types of assemblies should have guidelines for what types
of functions will be granted permissions other than the SAFE level; that level should be the
default for all assemblies unless the need for the other levels is proven.
For those modules that access outside resources, you'll want to make sure they will work
correctly in a multirow result set. For modules that are designed to work on a single row or a
few fields of data, you may wish to set a policy to prevent their use in queries or updates that
will affect large sets of data.
In addition to setting the permission-set level, you should also be sure you're aware of the
interactions between assemblies. Utility assemblies contain code required by other assemblies,
but these need to be owned by the same user or schema and in the same database for the
assemblies to reference each other. If they aren't, the code must be restructured to exist inside
a single assembly. The policy you develop should specify these restrictions to ensure the
resulting application functions as expected when deployed.
In setting this policy, you should also limit who has the right to add assemblies. Ideally, only
administrators should be allowed to create assemblies; if you assign rights to other users, be
sure that it's a limited group and that each assembly added is carefully documented as to its
purpose and use (especially if it doesn't use the SAFE permission level).
The last part of your policy that you need to design is the code policy for the assembly.
Although it's likely that code standards for .NET development exist in most environments,
these assemblies will be called from within SQL Server and potentially called many times for


a single query. The assembly must be built to handle the stresses of executing inside the server
environment and should be coded to ensure that it doesn't affect the stability of SQL Server.
Your environment may specify requirements for performance or load to ensure an assembly
doesn't slow the server or adversely affect performance. Because these assemblies will act on
large sets of data that will be used for updates, reports, and business decisions, the modules
need to be thoroughly examined for accuracy in addition to performance.
For assemblies that access resources outside SQL Server, make sure the work being performed
can complete in a timely manner and not affect the performance of other queries or cause
instability with the SQL Server instance. This is especially important if any modules work
directly with memory or the server's configuration.

SKILL SUMMARY
The security in your database is the final level of protection for your data. After logging in,
mapping to a user, and receiving the security tokens for any role memberships, the Select,
Insert, Update, Delete, Execute, and other T-SQL permissions applied to objects must be set to
meet your business requirements, but in as limited a way as possible.
Developing a strong policy is important for maintaining the security of your data, but it must
be applied to be effective. Your policy should be analyzed against the existing permissions and
any deficiencies brought into compliance. A limited number of exceptions may be required
because of preexisting applications or requirements, but they should be kept to a minimum.
SQL Server provides the ability for users to impersonate others and execute commands in a
security context other than their own. A secure SQL Server instance ensures that this is
controlled and limited to those cases where it's truly needed. The impersonation capabilities
also can affect auditing systems and capabilities, so these functions should be examined to
guarantee that they still perform the functions that are required.
Encryption is a wonderful way to secure your data, but it brings with it a number of performance trade-offs. A good policy will find a way to balance the secure control of your data
with the need to meet performance goals.
The CLR integration into SQL Server is an incredible capability that provides almost unlimited
ways to manipulate and analyze your data. However, it can drastically reduce the security
of your instance if controls aren't developed around the assemblies you allow on your
server. This is especially true of any CLR access outside of the SQL Server instance. A strong
policy should be developed early on to eliminate the introduction of new points of attack or
instability on the database server.
For the certification examination:

Understand how to design a permissions strategy. You should know the different parts of
the permissions strategy and how to structure your policy in order to meet your security
needs.

Know the permission-assignment commands. Be sure that you understand the meanings
and use of GRANT, REVOKE, and DENY.

Understand how to analyze existing permissions. You should be able to analyze existing
permissions and reconcile these against your security permission policy.

Understand execution context. Be sure that you understand how to determine and change
your execution context and the implications of doing so.

Know the implications of using the CLR. The CLR environment changes the capabilities of
SQL Server. You should understand the potential impact of these capabilities.


Knowledge Assessment
Case Study
Jack's Steamed Shrimp
Jack's Steamed Shrimp is a small food chain that specializes in steamed seafood and has
a number of locations in seaside resort towns. Each location has outside tables for diners
to enjoy and a thriving delivery business to the surrounding areas. The company has
grown to more than 20 locations and 250 employees.
Jack's has developed two applications internally that are used to run the day-to-day
operations of the business. One is designed to handle the point of sale (POS) for the
food sales, both in-store and telephone orders for delivery. The other maintains the
inventory and food orders required to ensure that none of the locations run out of
supplies.

Planned Changes
New applications are being rolled out to replace existing applications that currently run
on the same database. Some schema changes will be included to extend functionality,
but a number of the old objects in both the Inventory and Sales databases will be
maintained.
Some new users and roles will be required, but the existing users will be reassigned new
permissions based on a policy developed for the new applications. Each database will
have two new roles: AppUsers and AppAdmins, with permissions assigned based on the
capabilities of the users.

Existing Data Environment


There are two databases on a single SQL Server 2005 instance: Sales and Inventory.
Each database supports an application that is currently being replaced with an upgraded
version.
Currently, there is a single role in each database that includes each employees login
account. Because the terminals running the point-of-sale application are shared, each
employee has a SQL Server login that they use to verify their identity to the application
and server.
The point-of-sale database contains two schemas: the POS schema for the data-entry
tables used in orders and the Supervisor schema for data that is aggregated from the
orders for sales planning and forecasting.
The Inventory database contains a single schema, FoodStore, that contains all objects in
this database.

Existing Infrastructure
The SQL Server 2005 server runs on a Windows 2003 server. All employees have a
domain account for logging in to the applications and various terminals.
The SQL Server 2005 instance was installed with the default options.
Each clerk carries a keycard on which a digital certificate is stored; this certificate
uniquely identifies that employee.

Business Requirements
A number of enhancements have been written as CLR assemblies to handle a few
complex business requirements.


A number of regular customers keep their credit card numbers on file, and these
must be encrypted.
The developers need to have rights to the tables used for data lookups on the
Inventory system. However, the junior developers shouldn't have rights to the
detailed inventory tables.

Technical Requirements
The inventory application uses a web service to gather data from business partners
and must access an Internet web server for this data.
Individual clerks need to be able to input the credit card numbers for clients, but
they shouldn't have access to the tables where this information is stored. It's decided
that the execution context for the stored procedures that insert the data should be
changed to sales manager.
A development user-defined role is set up in each database. This role is a member of
the db_datareader role.
The POS schema contains two CLR modules: GetSpecials, which builds customized
coupons for returning clients; and CalcRoute, which determines the driving
directions for deliveries.
The Inventory database contains a CLR module called OrderPredictor that
determines whether a product needs to be reordered when it's called.

Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
Use the information in the previous case study to answer the following questions:
1. You create an assembly on the server for one of the new .NET modules given to you by
a developer. You create a function for this assembly and assign security rights, but it does
not seem to work. What is wrong?
a. .NET assemblies should be called directly, not through a function.
b. A stored procedure is used to access .NET assemblies.
c. The CLR subsystem is not enabled.
d. The module is not trusted.
2. Which level of permissions should be assigned when creating the assemblies that call the
web service?
a. SAFE
b. UNSAFE
c. EXTERNAL_ACCESS
d. REMOTE
3. You do not want to change the overall rights for the developers' role because the senior
developers should be allowed to access the Inventory table. What rights should you
assign to the junior developers?
a. REVOKE SELECT ON INVENTORY
b. DENY SELECT ON INVENTORY
c. REMOVE SELECT ON INVENTORY
d. EXCEPTION SELECT ON INVENTORY
4. If developers build a module that will handle the proper casing of customer names
when they are queried for printing on the delivery labels, what permission set should be
assigned this module?
a. UNSAFE
b. EXTERNAL_ACCESS
c. SAFE
d. LOW


5. One of the developers wants to use the CalcRoute module in the POS database from the
OrderPredictor module in the Inventory database to help give suppliers directions to the
locations. Can these two modules call each other?
a. Yes
b. No
6. There is a customer defaults table that contains three fields: the customer code, the
last order, and a credit card for automatic ordering. Which of these columns should be
encrypted for security?
a. All three
b. The last order and credit card number
c. The customer code and credit card number
d. The credit card number
7. One of your developers is building a stored procedure on his test system that will
require elevated privileges for its execution. It is decided to elevate privileges for the
module to that of the owner of the procedure at execution time. What clause should
the developer use when creating the module on his test system to ensure it is deployed
correctly on the production system?
a. EXECUTE AS SELF
b. EXECUTE AS <developer user name>
c. EXECUTE AS OWNER
d. EXECUTE AS CALLER
8. The auditing company was granted temporary access to the POS.CustomerDefaults
table during tax season by issuing GRANT SELECT ON POS.CustomerDefaults TO
Auditors. To remove these permissions, what should you execute for the Auditors role?
a. REVOKE SELECT ON POS.CustomerDefaults
b. DENY SELECT ON POS.CustomerDefaults
c. REMOVE SELECT ON POS.CustomerDefaults
d. GRANT NONE ON POS.CustomerDefaults
9. You wish to allow the senior developer access to the POS.Products table and want to let
him give this permission to other developers without granting him any fixed database
roles. What clause should you use with the GRANT command to achieve this?
a. WITH CASCADE
b. WITH GRANT
c. WITH ALLOW
d. WITH OWNERSHIP
10. After a trial period, you realize that the developers should not have permissions on the
production POS database. So, you want to revoke permissions from the senior developer
along with any permissions he has granted to others. What clause should you use with
the REVOKE clause?
a. WITH REMOVE
b. WITH REVOKE
c. CASCADE
d. FROM ALL

LESSON 8

Designing a Physical Database

LESSON SKILL MATRIX

TECHNOLOGY SKILL (EXAM OBJECTIVE)
Modify an existing database design based on performance and business requirements. (Foundational)
Ensure that a database is normalized. (Foundational)
Allow selected denormalization for performance purposes. (Foundational)
Ensure that the database is documented and diagrammed. (Foundational)
Design tables. (Foundational)
Decide if partitioning is appropriate. (Foundational)
Specify primary and foreign keys. (Foundational)
Specify column data types and constraints. (Foundational)
Decide whether to persist computed columns. (Foundational)
Specify physical location of tables, including filegroups and a partitioning scheme. (Foundational)
Design filegroups. (Foundational)
Design filegroups for performance. (Foundational)
Design filegroups for recoverability. (Foundational)
Design filegroups for partitioning. (Foundational)
Design index usage. (Foundational)
Design indexes for faster data access. (Foundational)
Design indexes to improve data modification. (Foundational)
Specify physical placement of indexes. (Foundational)
Design views. (Foundational)
Analyze business requirements. (Foundational)
Choose the type of view. (Foundational)
Specify row and column filtering. (Foundational)


KEY TERMS
constraint: A property assigned to a table column that prevents certain types of invalid data values from being placed in the column. For example, a UNIQUE or PRIMARY KEY constraint prevents you from inserting a value that is a duplicate of an existing value, a CHECK constraint prevents you from inserting a value that does not match a specified condition, and NOT NULL prevents you from leaving the column empty (NULL) and requires the insertion of some value.
database: A collection of information, tables, and other objects organized and presented to serve a specific purpose, such as searching, sorting, and recombining data. Databases are stored in files.
index: In a relational database, a database object that provides fast access to data in the rows of a table, based on key values. Indexes can also enforce uniqueness on the rows in a table. SQL Server supports clustered and nonclustered indexes. The primary key of a table is automatically indexed. In full-text search, a full-text index stores information about significant words and their location within a given column.
object: An object is an allocated region of storage; an object is named; if the database structure has a name, it's an object. Examples include database, table, attribute, index, view, stored procedure, trigger, etc.
table: A two-dimensional object, which consists of rows and columns, that stores data about an entity modeled in a relational database.
view: An object defined by a SELECT statement that permits seeing one or more columns from one or more base tables. With the exception of instantiated views (indexed views), views themselves do not store data.

Think about your house for a moment, then your office, classroom, gym locker, car, and
any other place you habitually haunt. These locations are full of objects you own, such
as clothes, food, DVDs, your copy of this textbook, tools, and so on. Most of your stuff
is probably at your home, but unless you're severely messy, it's unlikely that you randomly
toss your stuff into your house and simply hope you can find it again later.

What you do is try to store your various objects in containers (such as cabinets, dressers, or
bookshelves). More than likely, you also keep similar objects together; for example, your dress
shirts are hung next to one another in the closet, your Star Trek videos are all neatly lined up
on a shelf in some sort of order, and so on.
Why do you organize your objects? Because if you didn't, you couldn't find them later, and
if you couldn't find them, you couldn't use them. If you can't use them, what's the point of
having them? If you don't know where an object is when you want it, you'll spend a great deal
of unproductive time trying to find it. These principles also hold true with SQL Server.
SQL Server is full of tables, views, stored procedures, and other objects. When it comes to
your clothes, food, tools, and so on, you need containers to store them; with SQL Server,
those containers are databases.
It makes sense that before you begin creating objects, such as tables and views, you must
create the database that will contain those objects. In this Lesson, you'll learn what you need
to do while creating, configuring, and administering databases in order to maximize their
performance. As with most tasks in the book, planning is the hard part, but the rewards of a
well-constructed database plan are well worth it.
Databases consist of up to three types of files: primary data files, secondary data files, and
transaction log files. The primary data files store user data and system objects that SQL Server
needs to access your database. The secondary data files store only user information and are
used to expand your database across multiple physical hard disks. The transaction log files
allow up-to-the-minute recoverability by keeping track of all data modifications made on the
system before they're written to the data files.


Modifying a Database Design Based on Performance and Business Requirements

THE BOTTOM LINE

You don't make decisions on how the database should be designed in a vacuum or based on
personal whimsy. Because you're in the process of building a database infrastructure, you
need to consider some critical issues: performance and the users' or organization's business
requirements.

A SQL Server database consists of a collection of tables that stores a specific set of structured
data. A table contains a collection of rows, also referred to as records, and columns, also
referred to as attributes. Each column in the table is designed to store a certain type of information; for example, dates, names, dollar amounts, and numbers.
Tables have several types of controls, such as constraints, triggers, defaults, and customized
user data types, which are used to protect and guarantee the validity of the data. As you'll
see later, tables can have indexes similar to those in books that help you find rows quickly. A
database can also contain procedures that use Transact-SQL or .NET Framework programming code to perform operations with the data. These operations include creating views that
provide customized access to table data or running user-defined functions that perform complex calculations on a subset of rows.

Planning a Database
The first step in designing and creating a database is to develop a plan. A plan serves two
purposes: It provides a guide to follow when implementing the database, and it serves as
a functional specification for the database after it has been implemented.

The nature and complexity of a database, and the process of planning it, can vary significantly. A database can be relatively simple and designed for use by a single person, or it can
be large and complex and designed, for example, to handle all the banking transactions for
thousands of clients. In the first case, the database design may be little more than a few notes
on some scratch paper. In the latter case, the design may be a formal document hundreds of
pages long that contains every possible detail about the database.
Regardless of the database's size and complexity, there are some basic principles you should always
follow:
Gather information. Before creating a database, you need a good understanding of what
the database is for and what it's expected to do. Is it a new database? Is it a modification of
an existing electronic one? Or is it intended to replace a paper-based or manually performed
information system?
By reviewing the background and any existing systems, paper or electronic, you'll get most
of the information you need. Collect copies of customer statements, inventory lists, management reports, and any other documents that are part of the existing system, because these will
be useful to you in designing the database and the interfaces.
You should also review the business requirements of the database and organization and
make sure they coincide. Another key task is to interview the stakeholders and everyone else
involved in the system to determine what they do and what they need from the database.
It's also important to identify what they want the new system to do and to identify the
problems, limitations, and bottlenecks of any existing system. Your design should take advantage of every opportunity to optimize and, at the same time, minimize the physical shortcomings
and bottlenecks that may exist in the system, at least until you can correct them.


Inventory the objects. As part of your plan, you need to review the planned objects (and inventory those that exist, if you're modifying an existing database). You should do the following:

Identify the objects.


Model the objects.
Identify the types of information for each object.
Identify the relationships between objects.

Ensuring That a Database Is Normalized


Normalization is the process of taking all the data that will be stored in a database and
separating it into tables according to rigorous rules. Unless you're going to keep all your
data in a single table (not the best idea in most organizations), this is a decision-making
process. By defining ways in which tables can be structured, normalization helps you
come up with an efficient storage structure.

Efficient in this case doesn't mean minimum size. Efficiency refers to structuring the database
so that data stays organized and changes are easy to make without side effects. Minimizing
storage size is sometimes a product of normalization, but it's not the main goal.
Normalization primarily acts to preserve the integrity of your data. No matter what operations are performed in your database, it should be as difficult as possible to insert meaningless
data. Normalization recognizes four types of integrity:
Entity integrity. Maintaining data consistency for each row (or instance) in the table.
This is often enforced with a unique identifier which can, but need not, be a primary key.
Domain integrity. Maintaining data consistency within a column (or attribute) in a table.
This is often enforced with validity checking (null or not null, range or value).
Referential integrity. Maintaining data consistency between columns in a table or
between tables in the database. This is often enforced with a foreign key.
User-defined integrity. Maintaining data consistency by defining specific rules that
do not fall into one of the other integrity categories. This is often enforced with stored
procedures and triggers.
Normalizing a logical database design involves using formal methods to separate the data into
multiple related tables. Several narrow tables with fewer columns are characteristic of a normalized
database. A few wide tables with more columns are characteristic of a non-normalized database.
Some of the benefits of normalization include the following:

Faster sorting and index creation.


A larger number of clustered indexes.
Narrower and more compact indexes.
Fewer indexes per table. This improves the performance of the INSERT, UPDATE, and
DELETE statements.
Fewer null values and less opportunity for inconsistency, which increases database
compactness.
Fewer situations in which a single piece of data is stored in multiple and thus redundant
locations.

Allowing Selected Denormalization for Performance Purposes


Even though you'll usually normalize your database, there will be occasions when you
need to denormalize it. However, you should start with the idea that you should never
denormalize your data without a specific business reason for doing so. Careless
denormalization can ruin the integrity of your data and lead to slower performance: if you
denormalize too far, you end up including many extra fields in each table, and it takes
time to move that extra data from one place in your application to another.

The principal goal of normalization is to remove redundancy from the data. By contrast,
denormalization deliberately introduces redundancy into your data. Theoretically, you should
never denormalize data. However, in the real world, things aren't always that simple, and you
may need to denormalize data to improve performance. For example, if you have an overnormalized database, it can slow down the database server because of the number of joins that
must be performed to retrieve data from multiple tables.

TAKE NOTE

When you're forced to denormalize data for performance, make sure you document your
decision so that another developer doesn't think you made a mistake.

No hard and fast rules tell you exactly how (or whether) to denormalize tables in all circumstances, but you can follow these basic guidelines:
If your normalized data model produces tables with multipart primary keys, particularly
if those keys include four or more columns and are used in joins with other tables, you
should consider denormalizing the data.
If producing calculated values such as maximum historic prices involves complex queries with many joins, you should consider denormalizing the data by adding calculated
columns to your tables to hold these values. SQL Server supports defining calculated
columns as part of a table, as you'll see shortly.
If your database contains extremely large tables, you should consider denormalizing the
data by creating multiple redundant tables instead. You may do this either by column
or by row. For example, if the Medications table contains many columns, and some of
these (such as patent date) are infrequently used, it will help performance to move the
less frequently used columns to a separate table. With the volume of the main table
reduced, access to this data will be faster. If the Medication table is worldwide, but most
queries require information about medications from only one region, you can speed up
the queries by creating separate tables for each region.
If data is no longer live and is being used for archiving, or is otherwise read-only, denormalizing by storing calculated values in columns can make certain queries run faster.
If queries on a single table frequently use only one column from a second table, consider
including a copy of that single column in the first table.

Ensuring That the Database Is Documented and Diagrammed


If you have had any experience with IT or know anyone in the field, the most common
complaint is the lack of documentation. With respect to the databases in SQL Server,
documentation takes two forms. The first is traditional documentation, which is basically a written record of the code to supplement your plans. Ideally, the code is well
annotated. The second way is to document a database visually through a diagram that
shows the relationships between the parts of the database. Most DBAs and developers
find creating either kind of documentation tedious and are exceptionally creative when it
comes to finding excuses to move those tasks to the bottom of the priority pile.

As youll see, Microsoft has made the documentation and diagramming processes less painful
and designed them so that you can use some of the processes to modify your database.


LAB EXERCISE

Perform Exercises 8.1 and 8.2


in your lab manual.

DOCUMENTING THE SCHEMA


Starting with SQL Server 2005, you can document an existing database structure, called a
schema, by generating one or more SQL scripts. You can view a SQL script by using the
Management Studio Query Editor or any text editor. In Exercise 8.1, you'll document the
AdventureWorks database schema.
You can use the database schema generated as an SQL script for the following tasks:

Maintaining a backup script that lets the user re-create all users, groups, logins, and permissions
Creating or updating database development code
Creating a test or development environment from an existing schema
Training newly hired employees

TAKE NOTE
Be aware that you can also use Database Diagram Designer when you're designing (or modifying) a database to create, edit, or delete tables, columns, keys, indexes, relationships, and constraints.
Diagramming a Database Structure
Back when electronic databases were young, the only way you could diagram your database
was by sketching it on a piece of paper or drawing it on a blackboard. Thankfully, those days
are gone. SQL Server ships with a tool called Database Diagram Designer that allows you to
design and visualize a database to which you're connected. To help you visualize a database,
it can create one or more diagrams illustrating some or all of the database's tables, columns,
keys, and relationships.
You'll use database diagrams in Exercise 8.2.

Designing Tables

THE BOTTOM LINE

Tables are database objects that contain all the data in a database. Each database has at least
one table, and frequently more. Data in tables is organized in a row-and-column format from
which is derived the relational part of your relational database management system. Each
row represents a unique record, and each column represents an attribute within the record.

Figure 8-1 shows the Person.Contact table from the AdventureWorks database. It contains a
row for each contact and columns representing contact information such as IDs, titles, names,
and e-mail addresses, among others.
Figure 8-1
Tables consist of columns and
rows, also called attributes
and records.


When you design a database, you should first determine what tables it needs, the type of data
that goes in each table, and, as you saw earlier in the book, which users can access each table.
The recommended way to create a table is to first define everything you need in the table,
including data restrictions and other components. Key decisions you need to make about the
table include the following:
The types of data the table will contain
The number of columns in the table and, for each column, the datatype and length, if
it's required
Which columns will accept null values
Whether and where to use constraints or defaults and rules
The types of indexes that will be needed, where required, and which columns are
primary keys and which are foreign keys
Another good method for designing a table, especially in a complex or critical environment,
is to create a basic table, add some data to it, and then experiment with it for a while. This
method is useful because it gives you an opportunity to get an idea of what transactions are
common and what types of data are entered most frequently before you commit to a firm
design by adding constraints, indexes, defaults, rules, and other objects.

Deciding Whether Partitioning Is Appropriate


Tables in SQL Server can range from very small, having only a single record, to extremely
huge, with millions of records. These large tables can be difficult for users to work with
because of their sheer size. To make them smaller without losing any data, you can
partition your tables.
Partitioning tables works just like it sounds: You cut tables into multiple sections that can be
stored and accessed independently without the users' knowledge. Suppose you have a table that
contains order information, and the table has about 50 million rows. That may seem like a big
table, but such a size isn't uncommon. To partition this table, you first need to decide on a partition column and a range of values for the column. In a table of order data, you probably have
an order date column, which is an excellent candidate. The range can be any value you like;
but because you want to make the most current orders easily accessible, you may want to set
the range at anything older than a year. Now you can use the partition column and range to
create a partition function, which SQL Server will use to spread the data across the partitions.
Next, you need to decide where to keep the partitioned data physically; this is called the partition scheme. You can keep archived data on one hard disk and current data on another disk
by storing the partitions in separate filegroups, which can be assigned to different disks.
Once you've planned your partitions, you can create partitioned tables using the CREATE
TABLE statement, as shown in the sketch that follows.
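
The following is a minimal sketch of how the three pieces fit together. The table, column, and filegroup names and the boundary date are illustrative only, and the two filegroups are assumed to already exist in the database.

-- Partition function: rows before the boundary date go to partition 1, later rows to partition 2.
CREATE PARTITION FUNCTION pfOrderDate (datetime)
AS RANGE RIGHT FOR VALUES ('2009-01-01');

-- Partition scheme: map each partition to a filegroup.
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate TO (ArchiveFG, CurrentFG);

-- Create the table on the partition scheme, partitioned on the OrderDate column.
CREATE TABLE dbo.Orders
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) ON psOrderDate (OrderDate);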

TAKE NOTE

To see what a partitioned table looks like, examine the AdventureWorks database:
The TransactionHistory and TransactionHistoryArchive tables are partitioned on the
ModifiedDate field.
Partitioning a table improves performance and simplifies maintenance. When you split a large
table into smaller, individual tables, queries that access only a fraction of the data can run
faster because there is less data to scan. Maintenance tasks, such as rebuilding indexes or backing up a table, can also run more quickly.
You can partition a database without splitting tables by physically putting tables on individual
disk drives. Putting a table on one drive and related tables on another drive can improve
query performance because when queries that involve joins between the tables are run, multiple
disk heads read data at the same time. SQL Server filegroups can be used to specify the disks
on which to put the tables.
There are three types of partitioning:
Hardware partitioning. Hardware partitioning designs the database to take advantage of
the available hardware architecture, including multiprocessors and Redundant Array of
Inexpensive Disks (RAID) configurations.
Horizontal partitioning. Horizontal partitioning divides a table into multiple tables with the
same number of columns but fewer rows. For example, suppose a hospital has a table with a
billion rows of patient billing data. The table could be partitioned horizontally into 12 tables,
with each smaller table containing a month's worth of data. If a user issues a query requiring
data for a specific month, it references only the appropriate table.
If you opt to partition tables horizontally, you should partition the tables so that queries reference as few tables as possible. Otherwise, excessive UNION queries, used to merge the tables
logically at query time, can affect performance.
Horizontal partitioning is typically used when data can be divided based on age or use. For
example, a table may contain data for the last five years, but only data from the current year
is regularly accessed. In this case, it makes performance sense to partition the data into five
tables, with each table containing data from only one year.
Vertical partitioning. Whereas horizontal partitioning divides tables based on rows, vertical
partitioning divides a table into multiple tables containing fewer columns. There are two
types of vertical partitioning:
Normalization: the process of removing redundant columns from a table and putting them
in secondary tables that are linked to the primary table by primary key and foreign key relationships.
Row splitting: divides the original table vertically into tables with fewer columns. Each
logical row in a split table matches the same logical row in the others. For example, joining
the tenth row from each of the split tables re-creates the original row.
Like horizontal partitioning, vertical partitioning lets queries scan less data, thus improving
query performance.
Vertical partitioning can also have an adverse impact on performance because analyzing data
from multiple partitions requires queries that join the tables, slowing the process. Vertical
partitioning can also negatively affect performance if partitions are very large.

Specifying Primary and Foreign Keys


You can use a primary key to ensure that each record in your table is unique in some
way. The primary key does this by creating a special type of index called a unique index.
An index is ordinarily used to speed up access to data by reading all the values in a
column and keeping an organized list of where the record that contains that value is
located in the table. A unique index not only generates that list, but also doesn't allow
duplicate values to be stored in the index. If a user tries to enter a duplicate value in the
indexed field, the unique index returns an error, and the data modification fails.
TAKE NOTE

When a column can be


used as a unique identifier for a row (such as
an identity column), it's
referred to as a surrogate
or candidate key.

Take another look at Figure 8-1. In this case, assume the ContactID field is defined as a primary key. As you can see, you already have a contact with a ContactID of 1 in the table. If
one of your users tries to create another contact with a ContactID of 1, they will receive an
error and the update will be rejected, because ContactID 1 is already listed in the primary
key's unique index. (This is just an example: the ContactID field has the identity property
set, which automatically assigns a number with each new record inserted and won't allow you
to enter a number of your own design.)


Choosing a Primary Key


The primary key must consist of a column (or columns) that contains unique values.
This makes an identity column a good candidate for becoming a primary key, because
the values contained therein are unique by definition. If you don't have an identity column, make sure you choose a column, or combination of columns, in which each value
is unique. Regardless of whether you use an identity column, when deciding which field
to use as a primary key, you should consider these factors:

Stability. If the value in the column is likely to change, it won't make a good primary key.
When you relate tables together, you're making the assumption that you can always track the
relation later by looking at the primary key values.
Minimality. The fewer columns in the primary key, the better. A primary key of customer_id
and order_id is superior to one of customer_id, order_id, and order_date. Adding the extra
column doesnt make the key more unique; it merely makes operations involving the primary
key slower.
Familiarity. If the users of your database are accustomed to a particular identifier for a type
of entity, it makes a good primary key. For example, you might use a part number to identify
rows in a table of parts.

TAKE NOTE

When a column has mostly unique values, it's said to have high selectivity. When a
column has several duplicate values, it's said to have low selectivity. The primary key field
must have high selectivity (entirely unique values).

LAB EXERCISE

Perform Exercise 8.3 in your lab


manual.

In Exercise 8.3, you'll examine the Person.Contact table and modify a primary key.

USING FOREIGN KEYS


A foreign key is the column (or combination of columns) whose values match the primary
key in the same or another table. It's most commonly used in combination with a primary
key to relate two tables on a common column. It can also be defined to reference the columns
of a UNIQUE constraint in another table.
For example, assume you have two tables, Medications and Physicians, with the following columns, where PK means the primary key and FK means the foreign key:

MEDICATIONS              PHYSICIANS
MedicationID (PK)        PhysicianID (PK)
PhysicianID (FK)         LastName
Class                    FirstName
Number                   Specialty
Frequency                DateofHire


You can relate the two tables on the PhysicianID column that they have in common. If you
use the PhysicianID field in the Physicians table as the primary key (which you already have),
you can use the PhysicianID field in the Medications table as the foreign key that relates the
two tables. You won't be able to add a record to the Medications table if there is no matching
record in the Physicians table. Not only that: you can't delete a record in the Physicians
table if there are matching records in the Medications table.
With a foreign key in place, you can protect records not only in one table but in associated
related tables from improper updates. Users can't add a record to a foreign-key table without
a corresponding record in the primary-key table, and primary-key records can't be deleted if
they have matching foreign-key records.
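
A minimal sketch of the two tables follows; the column datatypes are assumptions made only for the example.

CREATE TABLE dbo.Physicians
(
    PhysicianID int         NOT NULL PRIMARY KEY,
    LastName    varchar(50) NOT NULL,
    FirstName   varchar(50) NOT NULL,
    Specialty   varchar(50) NULL,
    DateofHire  datetime    NULL
);

CREATE TABLE dbo.Medications
(
    MedicationID int         NOT NULL PRIMARY KEY,
    PhysicianID  int         NOT NULL
        CONSTRAINT FK_Medications_Physicians
        FOREIGN KEY REFERENCES dbo.Physicians (PhysicianID),  -- the foreign key that relates the tables
    Class        varchar(50) NULL,
    Number       varchar(20) NULL,
    Frequency    varchar(20) NULL
);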
The relationship between a primary key and a foreign key can take one of several forms. It
can be one-to-many. It can be one-to-one, where precisely one row in each table matches
one row in the other. Or it can be many-to-many, where multiple matches are possible
(imagine a table of physicians and a table of patients, each of whom might see many
physicians).

TAKE NOTE

To implement a many-to-many relation in SQL Server, you need to use an intermediate


joining table to break the relation into two separate one-to-many relations.

SPECIFYING COLUMN DATATYPES AND CONSTRAINTS


Each field in a table has a specific datatype, which restricts the type of data that can be
inserted. For example, if you create a field with a datatype of int (short for integer, which is
a whole number [a number with no decimal point]), you won't be able to store characters
(A-Z) or symbols (such as %, *, #) in that field because SQL Server allows only numbers
to be stored in int type fields. In Figure 8-2, you can see the datatypes listed in the second
column.
You'll notice that some of the fields in this table are either char or varchar (short for character
and variable character, respectively), which means you can store characters in these fields as
well as symbols and numbers. However, if numbers are stored in these fields, you won't be able
to perform mathematical functions on them because SQL Server sees them as characters, not
numbers.
Figure 8-2
Table field names and
datatypes

SPECIFYING SQL SERVER BUILT-IN DATATYPES


The following is a list of all the SQL Server datatypes, their uses, and their limitations:
Bigint. This datatype includes integer data from -2^63 (-9,223,372,036,854,775,808) through
2^63 - 1 (9,223,372,036,854,775,807). It takes 8 bytes of hard-disk space to store and is useful
for extremely large numbers that won't fit in an int type field.


Binary. This datatype includes fixed-length, binary data with a maximum length of 8,000 bytes.
It's interpreted as a string of bits (for example, 11011001011) and is useful for storing anything
that looks better in binary or hexadecimal shorthand, such as a security identifier.
Bit. This datatype can contain only a 1 or a 0 as a value (or null, which is no value).
Char. This datatype includes fixed-length, non-Unicode character data with a maximum
length of 8,000 characters. It's useful for character data that will always be the same length,
such as a State field, which will contain only two characters in every record. This uses the same
amount of space on disk no matter how many characters are stored in the field. For example,
char(8) always uses 8 bytes of space, even if only four characters are stored in the field.
Datetime. This datatype includes date and time data from January 1, 1753, to December
31, 9999, with time values rounded to increments of .000, .003, or .007 seconds. This takes
8 bytes of space on the hard disk and should be used when you need to track very specific
dates and times.
Decimal. This datatype includes fixed-precision and scale numeric data from -10^38 + 1
through 10^38 - 1 (for comparison, 10^38 is a 1 with 38 zeros following it). It uses two
parameters: precision and scale. Precision is the total count of digits that can be stored in the
field, and scale is the number of digits that can be stored to the right of the decimal point. If
you have a precision of 5 and a scale of 2, your field has the format 111.22. This type should
be used when you're storing partial numbers (numbers with a decimal point).
Float. This datatype includes floating-precision number data from -1.79E+308 through
1.79E+308. Some numbers don't end after the decimal point; pi is a fine example. For such
numbers, you must approximate the end, which is what float does. For example, if you set a
datatype of float(2), pi will be stored as 3.14, with only two numbers after the decimal point.
Identity. This isn't a datatype, but a property, typically used in conjunction with the int
datatype. It's used to increment the value of the column each time a new record is inserted.
Int. This datatype can contain integer (or whole number) data from -2^31 (-2,147,483,648)
through 2^31 - 1 (2,147,483,647). It takes 4 bytes of hard-disk space to store and is useful for
storing large numbers that you'll use in mathematical functions.
Money. This datatype includes monetary data values from -922,337,203,685,477.5808
through 922,337,203,685,477.5807, with accuracy to a ten-thousandth of a monetary unit. It takes 8 bytes of hard-disk space to store and is useful for storing sums of money
larger than 214,748.3647.
Nchar. This datatype includes fixed-length, Unicode data with a maximum length of 4,000
characters. Like all Unicode datatypes, it's useful for storing small amounts of text that will be
read by clients who use different languages.
Numeric. This is a synonym for decimal.
Nvarchar: This datatype includes variable-length, Unicode data with a maximum length of
4,000 characters. It's the same as nchar, except that nvarchar uses less disk space when there
are fewer characters.
Nvarchar(max). This datatype is just like nvarchar; but when the (max) size is specified, the
datatype holds 2^31 - 1 (2,147,483,647) bytes of data.
Real. This datatype includes floating-precision number data from -3.40E+38 through
3.40E+38. This is a quick way of saying float(24); it's a floating type with 24 numbers
represented after the decimal point.


Smalldatetime. This datatype includes date and time data from January 1, 1900, through
June 6, 2079, with an accuracy of 1 minute. It takes only 4 bytes of disk space and should be
used for less specific dates and times than you'd store in the datetime datatype.
Smallint. This datatype includes integer data from -2^15 (-32,768) through 2^15 - 1 (32,767).
It takes 2 bytes of hard-disk space to store and is useful for slightly smaller numbers than you
would store in an int type field, because smallint takes less space than int.
Smallmoney. This datatype includes monetary data values from -214,748.3648 through
214,748.3647, with accuracy to a ten-thousandth of a monetary unit. It takes 4 bytes of space
and is useful for storing smaller sums of money than would be stored in a money type field.
Sql_variant. This datatype lets you store values of different datatypes in a single column.
The only values it can't store are varchar(max), nvarchar(max), text, image, sql_variant,
varbinary(max), xml, ntext, timestamp, or user-defined datatypes.
Timestamp. This datatype is used to stamp a record with an incrementing counter when
the record is inserted and every time it's updated thereafter. It's useful for tracking changes to
your data.
Tinyint. This datatype includes integer data from 0 through 255. It takes 1 byte of space on
the disk and is limited in usefulness because it stores values only up to 255. Tinyint may be
useful for something like a product-type code when you have fewer than 255 product codes.
Uniqueidentifier. The NEWID() function is used to create globally unique identifiers that
look like the following example: 6F9619FF-8B86-D011-B42D-00C04FC964FF. These
unique numbers can be stored in the uniqueidentifier type field; they're useful for creating
tracking numbers or serial numbers that have no possible way of being duplicated.
Varbinary. This datatype includes variable-length, binary data with a maximum length of
8,000 bytes. It's just like binary, except that varbinary uses less hard-disk space when fewer
bits are stored in the field.
Varbinary(max). This datatype has the same attributes as the varbinary datatype; but when
the (max) size is declared, the datatype can hold 2^31 - 1 (2,147,483,647) bytes of data. This is
very useful for storing binary objects like JPEG image files or Word documents.
Varchar. This datatype includes variable-length, non-Unicode data with a maximum of 8,000
characters. It's useful when the data won't always be the same length, such as in a first-name
field where each name has a different number of characters. This uses less disk space when
there are fewer characters in the field. For example, if you have a field of varchar(20), but
you're storing a name with only 10 characters, the field will take up only 10 bytes of space,
not 20. The field will accept a maximum of 20 characters.
Varchar(max). This is just like the varchar datatype; but you specify a size of (max). The
datatype can hold 2^31 - 1 (2,147,483,647) bytes of data.
Xml. This datatype is used to store entire XML documents or fragments (a document that is
missing the top-level element).
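
To see how several of these datatypes might be combined, here is a small, purely illustrative table definition; the table and column names are invented for the example.

CREATE TABLE dbo.Customers
(
    CustomerID  int            NOT NULL,   -- whole numbers; usable in mathematical functions
    FirstName   varchar(30)    NOT NULL,   -- variable-length character data
    State       char(2)        NULL,       -- always exactly two characters
    Balance     money          NULL,       -- monetary values
    LastVisit   datetime       NULL,       -- date and time
    Photo       varbinary(max) NULL        -- large binary data such as an image file
);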

TAKE NOTE

The text, ntext, and image datatypes have been deprecated as of the 2005 version of SQL
Server. You should replace them with varchar(max), nvarchar(max), and varbinary(max)
when you design tables and replace them in existing tables.


SQL Server 2008 introduces eight new datatypes. Four new date and time related datatypes
provide a greater degree of control and precision of chronological data. Two new spatial
datatypes provide specific methods of storing positional data. The other two new datatypes
provide abilities to better handle large data objects and hierarchically structured data. The
following is a list of the new SQL Server 2008 datatypes, their uses, and limitations:
Date: This datatype uses the format of YYYY-MM-DD and is compliant with ANSI standards.
It uses 3 bytes of storage and can contain values from 0001-01-01 through 9999-12-31.
Datetime2: This datatype provides for a higher degree of time precision. This datatype can
store values from 0001-01-01 00:00:00.0000000 through 9999-12-31 23:59:59.9999999 and
uses from 6 to 8 bytes of storage depending upon the time precision.
Datetimeoffset: This datatype is different from the other date and time datatypes in that it contains a timezone offset value. The format and range is the same as the new Datetime2 datatype
except that an offset in hours and minutes follows the time value. This offset value can be either
positive or negative. Storage space ranges from 8 to 10 bytes depending upon the time precision.
TAKE NOTE

Filestream: Filestream is an extension property applied to the existing varbinary(max)
datatype. Using this property allows BLOB (Binary Large OBject) data, such as photographs and audio, to be stored outside the SQL database in the Windows NTFS file system
while still providing control and management through SQL Server. Special configuration steps
need to be taken to enable Filestream usage.
Hierarchyid: This datatype provides the ability to store a complex nested parent-child hierarchy of data in a single table column. The size of this datatype increases with the depth and
quantity of data to be stored in the rows of the table.
Geography: This datatype stores positional data using geometric shapes (such as Point, Polygon,
LineString, etc.) and latitude and longitude coordinates using a geodetic (round-Earth) model.
Geometry: This datatype is similar to the Geography datatype. The difference is that Geometry
uses a planar (flat Earth) model.
Time: This datatype only stores time values and is compliant with ANSI standards. Data is
stored in hh:mm:ss[.nnnnnn] format and uses from 3 to 5 bytes of storage depending upon the
time precision.

SPECIFYING USER-DEFINED DATATYPES


SQL Server allows you to create your own datatypes based on your needs. For example, you
might want to create a State datatype based on the char datatype with all the parameters (length,
capitalization rules, and so on) prespecified, including any necessary constraints and defaults.
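
A minimal sketch of such a type, assuming a two-character state value is required, might look like the following; the type and table names are illustrative only.

-- Create the alias type once, then reuse it in any table definition.
CREATE TYPE dbo.StateCode FROM char(2) NOT NULL;

CREATE TABLE dbo.Stores
(
    StoreID int           NOT NULL PRIMARY KEY,
    State   dbo.StateCode NOT NULL    -- column defined with the user-defined datatype
);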

Using Constraints
CERTIFICATION READY?
Rules (constraint objects
defined once and used
on multiple objects)
have been available
in previous editions of
SQL Server and remain
available in both SQL
Server 2005 and SQL
Server 2008. Know
when to use them if
only to eliminate a
possible answer on your
certification test.

As you've seen from the beginning of this Lesson, tables are wide open to just about any
kind of data when they're first created. The only starting restriction is that users can't violate the datatype of a field; other than that, the tables are fairly insecure from whatever
your users want to put in them.
So far you've seen that getting control of the table isn't difficult, but it does require work.
You've learned how to use primary and foreign keys to control what happens to data and limit
what can be entered and what can't.
Now, delve into the issue of how to restrict what data your users can enter in a field and how
to maintain data integrity. You can enforce three types of integrity:
Entity integrity. Entity integrity is the process of making sure each record in the table is unique
in some way. Primary keys are the main way of accomplishing this; they can be used with foreign
keys in enforcing referential integrity. Unique constraints are used when the table includes a field
that isn't part of the primary key but that needs to be protected against duplicate values.
Referential integrity. Referential integrity is the process of protecting related data that is stored
in separate tables. A foreign key is related to a primary key. The data in the primary key table
can't be deleted if there are matching records in the foreign-key table, and records can't be
entered in the foreign-key table if there is no corresponding record in the primary-key table. The
only way around this behavior is to enable cascading referential integrity, which lets you delete or
change records in the primary-key table and have those changes cascade to the foreign-key table.
Domain integrity. Domain integrity is the process of restricting what data your users can enter
in a field. Check constraints and rules can be used to validate the data the users try to enter
against a list of acceptable data, and defaults can be used to fill in data for the user automatically.

USING CHECK CONSTRAINTS


Check constraints enforce domain integrity by limiting the values that are accepted by a
column. They're similar to foreign-key constraints in that they control the values that are put
in a column; but whereas foreign-key constraints get their list of valid values from another
table, check constraints get their valid values from a logical expression that isn't based on
data in another column. For example, you can limit the range of values for a PayIncrease
column by creating a check constraint that allows only data that ranges from 3 to 6 percent.
This prevents salary increases from being entered in the table that exceed or fall below the
range established by the organization. As you can imagine, this is
helpful when you want to prevent a rogue accountant or disgruntled employee from finding
less-than-ethical ways to get money from the system (or reduce your pay increase for not
helping them with their computer problem).
You create a check constraint using any Boolean expression that returns either true or false
based on the logical operators. For the previous example, the logical expression is PayIncrease
>= 0.03 AND PayIncrease <= 0.06.
You can apply multiple check constraints to a single column. You can also apply a single
check constraint to multiple columns by creating it at the table level. For example, a multiple
column check constraint can be used to confirm that any row with a country/region column
value of USA also has to have a two-character value in the state column. This allows for
multiple conditions to be checked in one location.
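
As a hedged sketch of the syntax, the statements below add a column-level check for the PayIncrease example and a table-level, multiple-column check for the country/state example; the table and column names are hypothetical.

-- Column-level rule: pay increases must fall between 3 and 6 percent.
ALTER TABLE dbo.Payroll
ADD CONSTRAINT CK_Payroll_PayIncrease
    CHECK (PayIncrease >= 0.03 AND PayIncrease <= 0.06);

-- Table-level rule: a row with a country of USA must carry a two-character state code.
ALTER TABLE dbo.Customers
ADD CONSTRAINT CK_Customers_USAState
    CHECK (Country <> 'USA' OR LEN(State) = 2);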
USING DEFAULT CONSTRAINTS
Check constraints serve no purpose if your users forget to enter data in a column; that is
where default constraints come in. If users leave fields blank by not including them in the
INSERT or UPDATE statement they use to add or modify a record, default constraints are
used to fill in those fields. There are two types of defaults: object and definition.
Object defaults are defined when you create your table and affect only the column on which
theyre defined. Definition defaults are created separately from tables and are designed to be
bound to a user-defined datatype.
In addition to making sure there is an entry, both types can, when used properly, save data
entry time. For example, suppose that most of your customers live in the United States and
that your data-entry people must type USA in the country field for every new customer. That
may not seem like much work, but if you have a sizable customer base, those three characters
can add up to a lot of typing. By using a default constraint, your users can leave the country
field intentionally blank if the customer is from the USA, and SQL Server will fill it in.
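
A minimal sketch of this arrangement, continuing the hypothetical dbo.Customers table, might look like the following.

-- If the INSERT omits the Country column, SQL Server fills in 'USA'.
ALTER TABLE dbo.Customers
ADD CONSTRAINT DF_Customers_Country DEFAULT ('USA') FOR Country;

INSERT INTO dbo.Customers (CustomerID, FirstName)
VALUES (1, 'Maria');      -- Country is stored as 'USA' without being typed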
USING UNIQUE CONSTRAINTS
There are two major differences between primary key constraints and unique constraints.
First, primary keys are used with foreign keys to enforce referential integrity, and unique keys
aren't. Second, unique constraints allow null (blank) values to be inserted in the field, whereas
primary keys don't allow null values. However, as with any value participating in a unique
constraint, only one null value is allowed per column. Aside from that, they serve the same
purpose: to ensure that nonrepeating data is inserted in a field.


TAKE NOTE

A unique constraint
can be referenced by a
foreign-key constraint.

You should use a unique constraint when you need to ensure that no duplicate values can
be added to a field that isn't part of your primary key. A good example of a field that might
require a unique constraint is a Social Security Number field, because all the values contained
therein need to be unique; yet there would most likely be a separate employee ID field that
would be used as the primary key.
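
A brief sketch of that Social Security Number example follows; the table and column names are assumptions for illustration.

-- Employee IDs remain the primary key; Social Security numbers simply may not repeat.
ALTER TABLE dbo.Employees
ADD CONSTRAINT UQ_Employees_SSN UNIQUE (SSN);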

Deciding Whether to Persist Computed Columns


Earlier, you saw that you can create user-defined datatypes. You can also create computed
columns. These are special columns that don't contain any data of their own, but display
the output of an expression performed on data in other columns of the table. For
example, in the AdventureWorks sample database, the TotalDue column of the Sales.
SalesOrderHeader table is a computed column. It contains no data of its own but displays
the sum of the Subtotal, TaxAmt, and Freight columns as a single value.

Normally, computed columns are treated as virtual columns that aren't physically stored in the
table, and their values are recalculated every time they're referenced in a query. However, you
can use the PERSISTED keyword in the CREATE TABLE and ALTER TABLE statements
to require SQL Server 2005 to physically store computed columns in the table. When that
happens, the computed column values are updated when any columns that are part of their
calculation change.
Computed columns can be used in select lists, WHERE clauses, ORDER BY clauses, or any
other locations in which regular expressions can be used.
You must always persist computed columns in the following cases:
The computed column is used as a partitioning column of a partitioned table.
The computed column references a Common Language Runtime (CLR) function. In
this case, the computed column must be persisted so that indexes can be created on it.
The computed column is used in a CHECK, FOREIGN KEY, or NOT NULL constraint.
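
A simplified sketch of a persisted computed column, loosely modeled on the TotalDue example above (this is a hypothetical table, not the actual AdventureWorks definition):

CREATE TABLE dbo.OrderTotals
(
    OrderID  int   NOT NULL PRIMARY KEY,
    SubTotal money NOT NULL,
    TaxAmt   money NOT NULL,
    Freight  money NOT NULL,
    TotalDue AS (SubTotal + TaxAmt + Freight) PERSISTED   -- stored physically and kept up to date
);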

Specifying Physical Location of Tables, Including Filegroups and a Partitioning Scheme
There is no magical formula for the placement of tables or other components of a SQL
Server configuration. As always, your primary considerations should be performance and
recoverability. When placing tables or filegroups, or determining whether to partition
across multiple disks, consider all elements.

Designing Filegroups
THE BOTTOM LINE

Database objects, such as tables, indexes, and views, and database files can be grouped together in
filegroups for allocation and administration purposes. There are two types of filegroups:
primary and user-defined.

The primary filegroup contains the primary data file and any other files not specifically
assigned to another filegroup. All pages for the system tables are allocated in the primary
filegroup. User-defined filegroups are any filegroups specified by using the FILEGROUP
keyword in a CREATE DATABASE or ALTER DATABASE statement.


TAKE NOTE

Log files are never part of a filegroup. Log space is managed separately from data space.

No file can be a member of more than one filegroup. Tables, indexes, and large object data
can be associated with a specified filegroup, and all their pages are allocated in that filegroup.
Alternatively, the tables and indexes can be partitioned. In that case, the data of partitioned
tables and indexes is divided into units, each of which can be placed in a separate filegroup.
One filegroup in each database is designated the default filegroup. When a table or index is
created without specifying a filegroup, it's assumed that all pages will be allocated from the
default filegroup. Only one filegroup at a time can be the default filegroup. Members of the
db_owner fixed database role can switch the default filegroup from one filegroup to another.
If no default filegroup is specified, the primary filegroup is the default filegroup.

Designing Filegroups for Performance


Now that you've started designing your database and created some secondary data
files, you can logically group them together into a filegroup to help manage disk-space
allocation. By default, all the data files you create are placed in the primary filegroup;
when you create an object (for example, a table or a view), that object can be created on
any one of the files in the primary filegroup. If you create different filegroups, though,
you can specifically tell SQL Server where to place your new objects. Doing so can help
with performance.

For example, suppose you have a sales database with several tables. Some of the tables are
mostly static, whereas others are volatile and frequently written to. If all these tables are
placed in the same filegroup, you have no control over the file in which they're placed.
However, if you place a secondary data file on a separate physical hard disk (for example, disk D)
and place another secondary data file on another physical hard disk (disk E, perhaps), you can
place each data file in its own filegroup. This gives you control over where objects are created.
In this case, the best option is to place the first secondary data file by itself in a filegroup
named READ and to place the second secondary data file in its own filegroup named
WRITE. Now, when you create a table that is meant to be primarily read from, you can tell
SQL Server to create it on the file in the READ group, and you can place tables that are
meant to be written to in the WRITE filegroup.
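
A minimal sketch of this layout follows. The file paths, sizes, and table definition are assumptions, and the READ and WRITE filegroup names are delimited with brackets because READ is a reserved keyword.

CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_Primary, FILENAME = 'C:\Data\Sales_Primary.mdf'),
FILEGROUP [READ]
    (NAME = Sales_Read1, FILENAME = 'D:\Data\Sales_Read1.ndf'),
FILEGROUP [WRITE]
    (NAME = Sales_Write1, FILENAME = 'E:\Data\Sales_Write1.ndf')
LOG ON
    (NAME = Sales_Log, FILENAME = 'C:\Logs\Sales_Log.ldf');
GO
USE Sales;
GO
-- Create a mostly read-only table on the READ filegroup
CREATE TABLE dbo.ProductCatalog
(
    ProductID   int          NOT NULL PRIMARY KEY,
    ProductName nvarchar(50) NOT NULL
) ON [READ];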

TAKE NOTE

As you learned in Lesson 2, secondary data files make up all the data files other than the
primary data file. Some databases may not have any secondary data files, whereas others
have several secondary data files.

Using files and filegroups improves database performance, because it lets a database be created
across multiple disks, multiple disk controllers, or RAID systems. For example, if your computer has four disks, you can create a database that is made up of three data files and one log
file, with one file on each disk. As data is accessed, four read/write heads can access the data
in parallel. This speeds up database operations.
Additionally, files and filegroups enable data placement, because a table can be created in
a specific filegroup. This improves performance because all I/O for a specific table can be
directed at a specific disk. For example, a heavily used table can be put on one file in one
filegroup, located on one disk, and the other less heavily accessed tables in the database can be
put on the other files in another filegroup, located on a second disk.


Designing Filegroups for Recoverability

REF

Lesson 11 covers piecemeal restores in more detail.

In SQL Server 2005 and 2008, databases made up of multiple filegroups can be restored
in stages by a process known as piecemeal restore.

When multiple filegroups are used, the files in a database can be backed up and restored individually. Under the simple recovery model, file backups are allowed only for read-only files.
Using file backups can increase the speed of recovery by letting you restore only damaged files
without restoring the rest of the database. For example, if a database is made up of several
files physically located on different disks, and one disk fails, then only the file on the failed
disk has to be restored.
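
As a simplified sketch, the statements below back up a single filegroup and later restore just that filegroup after a disk failure. The database, filegroup, and backup path names are assumptions; under the full recovery model you would also restore subsequent log backups.

BACKUP DATABASE Sales
    FILEGROUP = 'SalesData1'
    TO DISK = 'E:\Backups\Sales_SalesData1.bak';

-- After a disk failure, restore only the damaged filegroup
RESTORE DATABASE Sales
    FILEGROUP = 'SalesData1'
    FROM DISK = 'E:\Backups\Sales_SalesData1.bak'
    WITH RECOVERY;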

Designing Filegroups for Partitioning


You can achieve performance gains and better I/O balancing by using filegroups to place
a partitioned table on multiple files. As you know, filegroups can consist of one or more
files, and each partition must map to a filegroup. A single filegroup can be used for multiple partitions; but for better data management, including more granular backup control,
you should design your partitioned tables wisely so that only related or logically grouped
data resides on the same filegroup.
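
As a rough sketch, the statements below map three date-range partitions to three filegroups. The boundary values, filegroup names, and table definition are assumptions.

CREATE PARTITION FUNCTION pfOrderYear (datetime)
AS RANGE RIGHT FOR VALUES ('2005-01-01', '2006-01-01');
GO
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear
TO (FG2004, FG2005, FG2006);
GO
-- The table is partitioned on OrderDate across the three filegroups
CREATE TABLE Sales.OrderArchive
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL
) ON psOrderYear (OrderDate);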

Designing Index Usage

THE BOTTOM LINE

REF

The next section of this Lesson contains more details on views.

In a database design, an index is an on-disk structure associated with a table or view that
speeds retrieval of rows from the table or view. An index contains keys built from one
or more columns in the table or view. These keys are stored in a structure (B-Tree) that
enables SQL Server to find the row or rows associated with the key values quickly and
efficiently.

If you wanted to find the topic filegroup in this book, how would you go about it? You could
flip through pages one at a time, looking for the word filegroups or you might examine the
table of contents at the front of the book. Both these methods work, but they aren't efficient.
Instead, you'd probably flip to the back of the book and review the index for the word
filegroup. If the index is well constructed, it will contain several entries and probably some
subheadings to help you differentiate the topic.
Two types of indexes are associated with a table or a view: clustered and nonclustered.
Clustered indexes sort and store the data rows in the table or view based on their key values.
These are the columns included in the index definition. There can be only one clustered
index per table because the data rows can be sorted in only one order.
The data rows in a table are stored in sorted order only when the table contains a clustered
index. When a table has a clustered index, the table is called a clustered table. If a table has no
clustered index, its data rows are stored in an unordered structure called a heap.
Nonclustered indexes have a structure separate from the data rows. A nonclustered index
contains the nonclustered index key values, and each key value entry has a pointer to the data
row that contains the key value.
Table 8-1 shows the differences between clustered and nonclustered indexes.



Table 8-1
Differences between Clustered and Nonclustered Indexes

CLUSTERED                                        NONCLUSTERED
Only 1 allowed per table                         Up to 249 allowed per table
Physically rearranges the data in the table      Creates a separate list of key values with
to conform to the index constraints              pointers to the location of the data in the
                                                 data pages
For use on columns that are frequently           For use on columns that are searched
searched for ranges of data                      for single values
For use on columns with low selectivity          For use on columns with high selectivity

TAKE NOTE

You can have only one clustered index per table because clustered indexes physically
rearrange the data in the indexed table.

Which type of index should you use, and where? In a few moments, you'll look at how you
can design indexes for faster data access and improved data modification. First, examine
some basic guidelines and strategies you should employ when designing indexes.
The first consideration is making sure you understand the characteristics of the database.
For example, is it an On-Line Transaction Processing (OLTP) database with frequent data
modifications, or a Decision Support System (DSS) or data-warehousing (On-Line Analytical
Processing [OLAP]) database that contains primarily read-only data?
Next, what are the characteristics of the most frequently used queries? For example, knowing
that a frequently used query joins two or more tables will help you determine the best type of
indexes to use.
You should also have a clear idea of the characteristics of the columns used in the queries.
For example, an index is ideal for columns that have an integer datatype and are also unique
or non-null.
Determine which index options may enhance performance when the index is created or
maintained. For example, creating a clustered index on an existing large table will benefit
from the ONLINE index option. The ONLINE option allows for concurrent activity on the
underlying data to continue while the index is being created or rebuilt.
Finally, make sure you give thought to the optimal storage location for your indexes.
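
To put these guidelines in T-SQL terms, here is a hedged sketch: a clustered index on a range-searched column, created with the ONLINE option, and a nonclustered index on a single-value lookup column. The table, column, and index names are assumptions, and ONLINE = ON requires an edition that supports online index operations.

-- Clustered index for range searches; ONLINE keeps the table available while building
CREATE CLUSTERED INDEX IX_CustomerOrder_OrderDate
    ON dbo.CustomerOrder (OrderDate)
    WITH (ONLINE = ON);

-- Nonclustered index for single-value lookups
CREATE NONCLUSTERED INDEX IX_CustomerOrder_CustomerID
    ON dbo.CustomerOrder (CustomerID);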

Designing Indexes to Make Data Access Faster and to Improve Data Modification
When you design an index, you should always follow these guidelines to maximize data
access and make it easier to modify data:

• Large numbers of indexes on a table negatively affect the performance of INSERT,
  UPDATE, and DELETE statements because all indexes must be adjusted appropriately
  as data in the table changes. Note that UPDATE statements are affected only if the
  indexed column data is changed.
• Avoid over-indexing heavily updated tables. You should keep indexes narrow, that is,
  with as few columns as possible.
• Use many indexes to improve query performance on tables with low update requirements
  but large volumes of data. Large numbers of indexes can help the performance of queries
  that don't modify data, such as SELECT statements, because the Query Optimizer has
  more indexes to choose from to determine the fastest access method.


• Indexing small tables may not be worthwhile, especially if it takes the Query Optimizer
  longer to traverse the index searching for data than performing a simple table scan
  would. Although the indexes on small tables may never be used, they must still be
  maintained as data in the table changes, thus slowing performance and retarding data
  modification with unnecessary resource usage.
• Indexes on views can provide significant performance gains when the view contains
  aggregations, table joins, or a combination of aggregations and joins. The view doesn't
  have to be explicitly referenced in the query for the Query Optimizer to use it.
• Use the Database Tuning Advisor to analyze your database and make index
  recommendations.

Creating Indexes with the Database Tuning Advisor


Ironically, the best way to plan and place indexes is to let SQL Server do it itself. SQL
Server comes with an extremely powerful tool called SQL Server Profiler, whose primary function is to monitor SQL Server. This tool provides an interesting fringe benefit
when it comes to indexing. Profiler specifically monitors everything that happens to the
MSSQLServer service, which includes all the INSERT, UPDATE, DELETE, and SELECT
statements that get executed against your database. Because Profiler can monitor what your
users are doing, it makes sense that Profiler can figure out what columns can be indexed to
make these actions faster. Enter the Database Tuning Advisor.

LAB EXERCISE

Perform Exercise 8.4 in your lab manual.

When you use Profiler, you generally save all the monitored events to a file on disk. This file
is called a workload, without which the Database Tuning Advisor can't function. To create the
workload, you need to run a trace (which is the process of monitoring) to capture standard
user traffic throughout the busy part of the day.

In Exercise 8.4, you'll walk through the process of using Database Tuning Advisor to create an
index.
Once your indexes have been created, they should be maintained on a regular basis to make
certain they're working properly.

Specifying Physical Placement of Indexes


Part of your design plan should be to determine the storage location for the indexes you
design. Use the following guidelines and recommendations as part of your determination:

• Storing a nonclustered index on a filegroup that is on a different disk than the table
  filegroup improves performance because multiple disks can be read at the same time.
• Clustered and nonclustered indexes can use a partition scheme across multiple
  filegroups. When you consider partitioning, determine whether the index should be
  aligned, that is, partitioned in essentially the same manner as the table, or partitioned
  independently.
• Create nonclustered indexes on a filegroup other than the filegroup of the base table
  (a short example follows this list). This will result in performance gains if the filegroups
  are using different physical drives with their own controllers.
• Partition clustered and nonclustered indexes to span multiple filegroups.
• Because you can't predict what type of access will occur and when it will occur, it may be
  better to spread your tables and indexes across all filegroups. Doing so guarantees that all
  disks are being accessed, because all data and indexes are spread evenly across all disks,
  regardless of which way the data is accessed. This is also a simpler approach for system
  administrators.
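
For instance, a nonclustered index can be placed on a filegroup other than the base table's by using the ON clause; the table, index, and filegroup names below are assumptions.

CREATE NONCLUSTERED INDEX IX_SalesOrder_CustomerID
    ON dbo.SalesOrder (CustomerID)
    ON [FG_Indexes];   -- index pages allocated from a separate filegroup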


Designing Views
THE BOTTOM LINE

A view is nothing more than a virtual table whose contents are defined by a query. A view
is the filter through which you look at one or more columns from one or more base tables.

In the real world, many companies have extremely large tables that contain hundreds of
thousands, if not millions, of records. When your users query such large tables, they usually
don't want to see all of these millions of records; they want to see only a small portion, or
subset, of the available data. You have two ways to return a small subset of data: You can use a
SELECT query with the WHERE clause specified, or you can use a view.
The SELECT query approach works well for queries that are executed infrequently, but this
approach can be confusing for users who don't understand T-SQL code. For example, to
query the AdventureWorks database to see only the first-name, last-name, and phone fields
for contacts in Connecticut's 203 area code, you can execute the following query:
USE AdventureWorks
SELECT Lastname, Firstname, Phone FROM Person.Contact
WHERE phone LIKE '203%'

That query returns a small subset of the data; but how many of your end users understand
the code required to get this information? Probably very few. You can write the query into
your front-end code, which is the display that your users see (usually in C# or a similar
language); but then the query will be sent over the network to the server every time it's
accessed, and that eats up network bandwidth.
The best approach in this sort of a situation is to create a view for the users. Like a real table,
a view consists of a set of named columns and rows of data. The only difference between the
view and the table is that your view doesn't contain any data; it shows the data, much like
a television set doesn't contain any people but just shows you pictures of the people in the
studio.
Unless it's indexed, a view doesn't exist as a stored set of data values in a database. The rows
and columns of data come from tables referenced in the query defining the view and are
produced dynamically when the view is referenced.
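
For example, the 203 area-code query shown earlier could be wrapped in a view; the view name used here is an assumption.

CREATE VIEW Person.vw_ContactPhone203
AS
SELECT Lastname, Firstname, Phone
FROM Person.Contact
WHERE Phone LIKE '203%';
GO
-- End users query the view without knowing the underlying T-SQL
SELECT * FROM Person.vw_ContactPhone203;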
Now that you have a basic understanding of views, look at how to integrate them into your
physical database design.

Analyzing Business Requirements


As you've just learned, views are generally used to focus, simplify, and customize the perception each user has of the database. Views can be used as security mechanisms by letting
users access data through the view without granting the users permissions to directly access
the view's underlying base tables. Views can be used to provide a backward-compatible
interface to emulate a table that used to exist but whose schema has changed. Views can
also be used when you copy data to and from Microsoft SQL Server to improve performance and to partition data. How views are used depends heavily on your assessment of
your organizations business requirements.


Views serve a number of functions and can have a number of roles:


• Focusing data for the user. Views let users focus on specific data that interests them and on
  the specific tasks for which they're responsible. Unnecessary or sensitive data can be left out
  of the view.
• Simplifying data manipulation. You can define frequently used joins, projections, UNION
  queries, and SELECT queries as views so that users don't have to specify all the conditions
  and qualifications every time an additional operation is performed on that data.
• Providing backward compatibility. Views enable you to create a backward-compatible
  interface for a table when its schema changes.
• Customizing data. Views let different users see data in different ways, even when they're
  using the same data at the same time. This is especially useful when users who have many
  different interests and skill levels share the same database.
• Exporting and importing data. Views can be used to export data to other applications.
  For example, you may want to use the Customer and SalesOrderHeader tables in the
  AdventureWorks database to analyze sales data using Microsoft Excel. To do this, you can
  create a view based on the Customer and SalesOrderHeader tables. You can then export the
  data defined by the view.
• Combining partitioned data across servers. The T-SQL UNION set operator can be used
  within a view to combine the results of two or more queries from separate tables into a single
  result set. This appears to the user as a single table called a partitioned view. In a partitioned
  view, the data still appears as a single table and can be queried without having to manually
  reference the correct underlying table.

Choosing the Type of View


SQL Server uses three types of views: standard, indexed, and partitioned. Each has its own
strengths and weaknesses.

Standard views. Combining data from one or more tables through a standard view lets you
realize most of the benefits of using views. These include focusing on specific data and simplifying data manipulation.
Indexed views. An indexed view is a view that has been materialized. This means it has been
computed and stored. You index a view by creating a unique clustered index on it. Indexed
views dramatically improve the performance of some types of queries. Such views work best
for queries that aggregate many rows. They aren't well-suited for underlying data tables that
are frequently updated (a brief sketch follows the list below).
Indexed views typically don't improve the performance of the following types of queries:
• OLTP systems that have many writes
• Databases that have many updates
• Queries that don't involve aggregations or joins
• Aggregations of data that have a lot of different values for the GROUP BY key
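
A minimal sketch of an indexed view follows; the view and index names are assumptions. Note that SCHEMABINDING, two-part table names, and a COUNT_BIG(*) column are required before the unique clustered index can be created.

CREATE VIEW Sales.vw_SalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(OrderQty * UnitPrice) AS TotalSales,
       COUNT_BIG(*)              AS RowCnt
FROM Sales.SalesOrderDetail
GROUP BY ProductID;
GO
-- The unique clustered index materializes the view
CREATE UNIQUE CLUSTERED INDEX IX_vw_SalesByProduct
    ON Sales.vw_SalesByProduct (ProductID);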

Partitioned views. A partitioned view joins horizontally partitioned data from a set of
member tables across one or more servers. As you learned, this has the effect of making the
data appear to the user as if it were one table. A view that joins member tables on the same
instance of SQL Server is a local partitioned view.
A view that joins data from tables across servers is called a distributed partitioned view.
Distributed partitioned views are used to implement a federation of database servers (which
is not part of the Star Trek universe). A federation of database servers (FDS) is a group of
independently administered servers that cooperate to share the processing load of a system. By


partitioning data, you can create an FDS, which lets you scale out a set of servers to support
the processing requirements of large, multitiered Web sites.

Specifying Row and Column Filtering


Views function to help focus data, but the result of a view on a database can still be
millions of records long, and your users may still be overwhelmed. Or, you may only
want to call for a specific part of the view output.
LAB EXERCISE

Perform Exercise 8.5 in your lab manual.

Filtering views is simple, as you'll see in Exercise 8.5.

SKILL SUMMARY
The principal building block of any database infrastructure design, the physical database, has
been the focus of this Lesson, and its mastery has required us to go through quite a bit of
material. First you learned that a database is a container for other objects, such as tables
and views, and that without databases to contain all these objects, your data would be a
hopeless mess.
You learned that a database consists of up to three kinds of files: primary data files, secondary
data files, and transaction log files. The primary data files are used to store user data and
system objects that SQL Server needs to access your database. The secondary data files store
only user information and are used to expand your database across multiple physical hard
disks. The transaction log files are used for up-to-the-minute recoverability by keeping track of
all data modifications made on the system before theyre written to the data files.
You were also introduced to the value of normalization and when to selectively allow
denormalization for performance purposes. You learned how to use SQL Server scripts to
document a database and how to use the Database Diagram Designer to diagram it.
You have learned that you should sit down with a pencil and paper and think about how the
tables will be laid out before you create them, and that you need to decide what the tables
will contain, making the tables as specific as possible. You also learned that tables are made
up of fields or columns (which contain a specific type of data) and rows (an entity in the
table that spans all fields). Each column in the table has a specific datatype that restricts the
type of data it can hold; a field with an int datatype can't hold character data, for example.
Then you found you can create your own datatypes, which are system datatypes with all the
required parameters presupplied.
Tables are open to just about any kind of data when they're first created. The only restriction
is that users can't violate the datatype of a column. To restrict the data your users can enter
in a column, you learned how to enforce three types of integrity (domain, entity, and
referential) through check, default, and unique constraints, as well as primary and
foreign keys.
You've learned how using files and filegroups improves database performance, because it lets
a database be created across multiple disks, multiple disk controllers, or RAID systems.
Data access can be accelerated by using indexes at the expense of slowing data entry.
You first looked at the clustered index. This type of index physically rearranges the data
in the database file. This property makes the clustered index ideal for columns that are
constantly being searched for ranges of data and that have low selectivity, meaning
several duplicate values.
Nonclustered indexes don't physically rearrange the data in the database; rather, they create
pointers to the actual data. This type of index is best suited to high-selectivity tables (with few
duplicate values) where single records are desired rather than ranges.

You also learned to design indexes by using the Database Tuning Advisor, a tool designed to
take the stress of planning the index off you and place it on SQL Server. This knowledge of
indexing will make it easier for you to plan indexes so that you can speed up data access for
your users.
Views don't contain any data; a view is just another means of seeing the data in the underlying
base table. You also learned about the three types of views and when to use them, and how
to filter data in a view.
For the Certification Examination:

Be familiar with normalization and denormalization. It's important that you know why
databases are normalized and when you should opt to selectively denormalize a database.
Pay particular attention to the performance parameters that dictate when these need to
be done.

Know how to document and diagram a database. Make sure you understand the uses of
the Script As process for creating a SQL script of a database object and how to use the
Database Diagram Designer.

Be familiar with partitioning. Partitioning is a feature introduced in SQL Server 2005. Its
role in performance enhancement, especially over multiple databases, is critical.

Understand constraints and keys. Make sure you know how primary and foreign keys
function, as well as check, default, and unique constraints. You should be familiar with the
best situation in which to use each.

Understand filegroups. Filegroups are a key method for maximizing SQL Server database
performance. You should be familiar with the performance enhancement they offer and
their restrictions and limitations.

Understand indexes. You should know the basic differences between the types of indexes.

Understand views. You should know that the purpose of a view is to focus data for users.

Know the three different types of views and when to use them.

Knowledge Assessment
Case Study
Trevallyn Travel
Trevallyn Travel provides a variety of travel services. It has nine storefront agencies in
six North American cities, with its main office in New York. The company also serves
worldwide customers through an online travel agency.

Planned Changes
Trevallyn Travel plans to upgrade all existing SQL Server computers to SQL Server
2005. The management of the company wants a complete review of the existing
physical database design infrastructure to ensure that it's aligned with business
requirements and optimizes performance.

Existing Data Environment


All SQL Server computers are located in the main office in New York. Currently, all
SQL Server computers are installed with a single default instance.
Existing databases are described in the following table:


SERVER NAME    DATABASE NAME     SIZE      DESCRIPTION
Launceston     HR                500 MB    Employee information, benefits,
                                           commission data
Devonport      Storefront        4 GB      Reservation tracking and completed
                                           travel forms for storefront travel
                                           agencies
Hobart         OnLineReadOnly    6 GB      Read-only subscriber to the
                                           TravelOnLine database. Provides
                                           information on existing reservations
                                           to Internet customers
Ravenwood      TravelOnLine      10 GB     Reservation tracking and completed
                                           travel forms for the online travel
                                           agency

The Storefront database is accessed through a Visual Basic application. The TravelOnLine
and OnLineReadOnly databases are accessed through a web services application.

Existing Infrastructure
The TravelOnLine and Storefront databases are mission critical. The current backup
strategy includes nightly full backups, hourly transaction-log backups, and the bulk-logged recovery model.
System databases are maintained on a hard disk set that is separate from the user
databases.

Business Requirements
The TravelOnLine database is the busiest and should be optimized accordingly.
In the Reservation table in the Storefront database, reservations that were made in
the last six months should be retrieved the fastest.
The distribution server has a large amount of free disk space. The distribution
database must be able to be restored from the most recent backup and then receive
changes from the publication database, allowing replication to continue.
A single drive failure should not cause a server to fail.
The TravelOnLine database has a table named Pax, which holds passenger information.
(Pax is travel-agent jargon for passenger.) Any optimization that occurs on the table
should not affect current indexes. The table contains the following columns:

PaxID
PaxName
Address
City
Region
PostalCode
Phone
PreferredAirline

The most common query to this table looks up the passengers name.
Reservation records in the TravelOnLine database have a status field that can have one
of three settings: 1 (received), 2 (in process), or 3 (completed). Users can retrieve and
update incomplete reservations through a view, but they must not be able to complete
orders through the view.


Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
Use the information in the previous case study to answer the following questions:
1. You need to define the datatype for a new column named MeritScore in the HR
database. Which option should you select?
a. Use the text datatype.
b. Use the nvarchar(max) datatype.
c. Use the vchar(max) datatype.
d. Set the large value.
2. You need to make recommendations for maximizing the performance of queries based
on passenger names from the Pax table. What do you recommend?
a. Create an index on the passenger name and ID columns. Set the index fill factors at
10 percent.
b. Create a nonclustered index on only the passenger name column.
c. Create a clustered index on the passenger name column.
d. Create a nonclustered index, using the INCLUDE clause for all columns, on the
passenger name column.
3. Query performance on the Reservation table of the TravelOnLine database is less than
optimal. As a solution, you decide to partition the table so that queries on the current
and future reservations are quickly returned. Which of the following is the best choice
for the partition column?
a. Reservation date column
b. Reservation status column
c. Reservation airline column
d. Reservation agent column
4. You have two tables in the HR database, HR.EmployeeName and HR.EmployeeAddress,
with columns as follows:
HR.EMPLOYEENAME           HR.EMPLOYEEADDRESS
EmployeeID                AddressID
LastName                  EmployeeID
FirstName                 Street
Title                     City
Social Security Number    ZipCode
City                      State

Based on the previous information, which is the best choice to be a foreign key?
a. City column in HR.EmployeeName
b. City column in HR.EmployeeAddress
c. EmployeeID in HR.EmployeeName
d. EmployeeID in HR.EmployeeAddress
5. You have been told that the MeritIncrease column should be configured so that no
employee receives less than a 2 percent merit increase and no one receives more than an
8 percent increase. What do you do?
a. Create a default constraint set to 2 percent.
b. Create a check constraint that allows for data ranging from 2 to 8 percent.
c. Create a foreign key relationship between MeritIncrease and Salary.
d. Create a unique constraint.


6. Under what circumstances must a computed column be persisted? (Choose all that
apply.)
a. The computed column is used as a partitioning column of a partitioned table.
b. The column references a CLR function.
c. The computed column is used as a primary key.
d. The computed column is used as a check constraint.
7. Which of the following are true about the differences between clustered and
nonclustered indexes? (Choose all that apply.)
a. Up to 249 clustered indexes are allowed per table.
b. Nonclustered indexes are designed for columns that are searched for single values.
c. Clustered indexes are best used on columns with low selectivity.
d. Both physically rearrange the data in the table to conform to their constraints.
8. You decide to create a view of the OnLineReadOnly database to show the current reservation status based on passenger name. The view joins tables from across servers. This is
an example of what kind of view?
a. Partitioned view
b. Standard view
c. Indexed view
d. Constrained view
9. Which of the following effects does normalizing a database object, such as a database or
table, have on indexing?
a. Faster sorting and index creation
b. Large number of clustered indexes
c. Narrower and more compact indexes
d. All of the above
10. You have two tables in the HR database, HR.EmployeeName and HR.EmployeeAddress,
with columns as follows:
HR.EMPLOYEENAME           HR.EMPLOYEEADDRESS
EmployeeID                AddressID
LastName                  EmployeeID
FirstName                 Street
Title                     City
Social Security Number    ZipCode
City                      State

You need to ensure that there are no duplicate values in the Social Security Number
field. How should you do that?
a. Add a default constraint to the field.
b. Add a unique constraint to the field.
c. Make the Social Security Number field a primary key.
d. Make the Social Security Number field a foreign key.

Creating Database Conventions and Standards

LESSON 9

LESSON SKILL MATRIX

TECHNOLOGY SKILL                                                EXAM OBJECTIVE
Create database conventions and standards.                      Foundational
Define database object-naming conventions.                      Foundational
Define consistent synonyms.                                      Foundational
Define database coding standards.                               Foundational
Document database conventions and standards.                    Foundational
Create database change control procedures.                      Foundational
Establish where to store database source code.                  Foundational
Isolate development and test environments from the              Foundational
production environment.
Define procedures for moving from development to test.          Foundational
Define procedures for promoting from test to production.        Foundational
Define procedures for rolling back a deployment.                Foundational
Document the database change control procedures.                Foundational

KEY TERMS

camelCase: A method or standard for naming objects. With camelCase, all characters
are lowercased except the first letter of component words other than the first word. An
example of camelCase would be: customerAddress.

convention: A convention is a set of agreed, stipulated, or generally accepted norms or
criteria, often taking the form of a custom.

method: A specific means of action to accomplish a stipulated goal or objective.

PascalCase: A method or standard for naming objects. With PascalCase, all characters are
lowercased except the first letter of each component word. An example of PascalCase
would be: CustomerAddress.

standard: A standard establishes uniform engineering or technical criteria, processes, and
practices, usually in a formal, written manner.


If you have any experience with databases, the need for and value of naming conventions,
particularly in an enterprise setting, should be both self-evident and axiomatic. In fact,
you may wonder why this book needs to have a Lesson on the obvious. If you have little
or no background, then you may consider this Lesson a primer in becoming a punctilious
nitpicker with a tendency toward anal retentiveness and rigidity of thought. You may also
want to know, "What's the big deal about how things are named and what standards are
applied? The results are all that's important."

The answer is simple. Having database conventions and standards offers a method of
organizing the server infrastructure as well as increasing productivity and the effectiveness
of the database administrator and development teams. Good standards that are consistently
applied grow in usefulness over time because they help make even unfamiliar databases easier
to understand. Because it's unlikely that you'll be working alone, devising and creating database conventions and standards should be a team effort. The standards should be good, workable, and something your team members agree with.
Finally, although it's easy to think up naming conventions and coding standards, they must
be durable enough to survive changing circumstances. It's difficult to modify conventions and
standards and apply them retrospectively to existing databases because of the impact doing
so can have on applications and security. Flexibility and the ability to adapt to changing (and
unforeseen) circumstances for standards and conventions are crucial to how successful they
are. A good example in the non-IT world is the U.S. Constitution. A mere four pages long,
it's both the shortest and longest-lasting constitution in the world. The genius of its longevity
and effectiveness is its flexibility and ability to adapt to circumstances not even dreamt of by
its original authors.

Understanding the Benefits of Database Naming Conventions


THE BOTTOM LINE

When designed correctly, a database naming convention lets database developers and
administrators easily identify the type and purpose of any object in a database system.
It's important to create a consistent and meaningful naming convention for a database server
infrastructure. Applying a single, consistent standard for the entire infrastructure, even if you
have to implement it in steps, will reduce the time and associated costs when developers start
using a new database. It will also simplify the task of managing a larger number of databases.

TAKE NOTE

Database naming conventions are typically product specific. What constitutes a valid name
or good practice in one database management system may be invalid or bad practice in
another. If you're using SQL Server with other database management systems, you'll
probably need to create a naming convention that spans each system. Similarly, if you're
migrating from a different database management system to SQL Server, you'll likely need
to adapt the names used by migrated objects to conform to SQL Server best practices.

Some benefits of establishing a database naming convention include the following:


• Personnel who use or maintain the database can easily identify an object's purpose, type,
  and function.
• Database naming conventions let you integrate new developers into the development
  team quickly and easily. The learning curve can be shortened because good naming
  conventions can make database code easier to read and understand.


Despite the tangible benefits of naming conventions, there are still those who think that the
need to establish them doesn't apply to their circumstances. The arguments tend to fall into a
couple of categories:
• Our team (or the company) is small, so adopting and enforcing a naming convention
  is unnecessary administrative overhead. The problem with this argument is that the
  smallness of the organization is what calls for a naming convention. Without such a
  convention, dependencies on particular team members are likely to develop. Similarly,
  depending on an individual's memory means you'll inevitably lose some critical
  knowledge if a team member moves on. Naming conventions and standards can minimize
  that loss.
• There isn't time for new team members to learn current conventions. This is a false
  economy usually argued for by a shortsighted manager. There is an old proverb: "Give a
  man a fish, and he eats for a day; teach him to fish, and he eats for a lifetime." By applying
  the proverb here, the time spent understanding how a naming convention works can
  save considerable time later.

Establishing and Disseminating Naming Conventions


You should establish a convention for naming all the major types of database objects and
provide documentation for all staff responsible for creating or maintaining databases and
database applications. You should learn how to avoid common pitfalls and dangers with
naming conventions. The following sections cover all these topics.

PROVIDING NAMING CONVENTIONS FOR DATABASE OBJECTS


There's no correct way to establish naming conventions for database objects. There are
many approaches. You can, for example, use standard prefixes or suffixes based on the
type of objects. Or, you can adopt a set of conventions for naming the bodies of database
objects.
It's common and useful to prefix constraints to identify the object type. For example, many
database designers use PK_ for primary keys, CK_ for check constraints, and FK_ for
foreign key constraints. Similar conventions for other database objects include usp for stored
procedures, ufn for user-defined functions, vw_ or v_ for views, and so on.
Another common practice is to give the body of a stored procedure a name that reflects the
stored procedures function. For example, a stored procedure that gets a list of salesmen from
the Personnel table might be called uspGetSalesmen.
With other objects, for example indexes, it's common to name the object by using the
name of the table followed by the name of the columns in the index. For example, a nonclustered index over the CustomerID column of the CustomerOrder table might be called
IX_CustomerOrder_CustomerID.
For tables, the convention frequently used is the singular name of the entity that the table
represents, such as Employee, Product, or CustomerOrder.
Table 9-1 describes some common naming conventions for database objects.

TAKE NOTE

When faced with existing database objects that are poorly named but can't be renamed, you
can use synonyms as alternate names that are more descriptive. For example, if you have a
table named OrdNm that holds order name data, you should consider defining a synonym
called OrderName. You can then reference the table through this synonym until you can
rename the table.
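
A sketch of that approach follows; the dbo schema is an assumption.

-- Create a descriptive alias for the poorly named table
CREATE SYNONYM dbo.OrderName FOR dbo.OrdNm;

-- Queries can now use the synonym instead of the original name
SELECT * FROM dbo.OrderName;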



Table 9-1
Summary of database objects and typical naming conventions

DATABASE OBJECT      CONVENTION

Tables               Tables typically represent entities such as Customer or Order. It's best to use
                     the name of the entity that the table represents for the name of the table
                     because the name should be both accurate and descriptive. Use singular
                     names whenever possible.

Columns              Columns describe attribute data values, and you should try to retain the
                     same meaningful name for each column in the database. For example,
                     use LastName for a column holding the last name of an employee in the
                     Employee table. Using descriptive names makes your SQL code more readable.

Views                Views typically join several tables or other views together to generate or
                     summarize information. Use names that indicate the purpose of the
                     information they return. It's common to use a standard prefix such as vw_ for
                     view names to distinguish them from tables. For example,
                     vw_YearlySalesPerSalesRegion could be the name of a view returning yearly
                     sales grouped by sales region.

Stored procedures    Stored procedures express actions. You should use a meaningful name
                     combining verbs and objects that describe their action. To avoid confusion
                     with system-stored procedures, don't use the sp_ prefix; consider using usp
                     instead.

User-defined         User-defined functions calculate values. As with stored procedures, use
functions            meaningful names that describe the calculations the functions perform. A
                     common convention is to prefix the name with ufn to distinguish user-defined
                     functions from columns or views in SQL statements. For example,
                     ufnCalculateSalesTaxDue could be the name of a user-defined function that
                     calculates the sales tax due for a transaction.

Triggers             Triggers perform an automatic action when an event occurs on a table. You
                     should combine the name of the table and the trigger event type. For example,
                     a trigger called dOrder might handle the delete event on the Order table,
                     and the uOrder trigger might handle the update event on the Order table. You
                     can also indicate whether the trigger is an AFTER or INSTEAD OF trigger by
                     including After or InsteadOf in the name, for example, dAfterOrder.

Indexes              Index names commonly combine the name of the table and the names of the
                     columns, and they frequently include a prefix such as IX_. For example, the
                     index IX_Employee_ManagerID might span the ManagerID column in
                     the Employee table. You can augment the prefix to indicate whether the index
                     is clustered or nonclustered, a unique index, and so on. An advantage is that
                     the index names become self-documenting. However, this approach can result
                     in lengthy names. Normally this isn't a problem because you're unlikely to
                     refer directly to the name of an index in your applications or SQL commands.

Constraints          Constraints specify rules to which data in a column or set of columns in a
                     table must conform. It's best to name a constraint after the rule it enforces
                     or the column it operates on. You can also add a prefix indicating the type of
                     constraint (check, primary key, foreign key, unique constraint, and so on). For
                     example, the check constraint CK_Employee_MaritalStatus might validate the
                     data in the MaritalStatus column in the Employee table as it's entered.

Schemas              Use schemas to group database objects by functionality and to partition
                     objects into protected domains. One danger is that it can be easy to confuse
                     schema names with tables. For example, in the AdventureWorks database
                     provided with SQL Server 2005, Sales is the name of a schema. However,
                     many databases also have a table called Sales. It's more effective and less
                     confusing to add a prefix that identifies a name as a schema. For example,
                     you could use schSales to represent a schema and Sales to represent a table.


CERTIFICATION READY?
Make sure you know where SQL Server looks for stored procedures and the search order.

AVOIDING PITFALLS AND DANGERS WITH NAMING CONVENTIONS


You should exercise caution when developing your infrastructure-wide naming conventions
because you may get only one shot at it. Similarly, because it's difficult to modify production
systems, any bad naming habits that an organization follows can be very long-lived. This can
become a classic case of a self-perpetuating error that only a computer can excel at, and you'll
be responsible for it. Furthermore, following bad practices can slow development and can
result in unexpected behavior by a database or its contents.
RECOGNIZING BAD PRACTICES
If you're not careful, database naming conventions can lead to problems. While there are no
hard-and-fast rules beyond consistency and using conventions that make sense in your context and are easily transferable to other situations, there are some mistakes that you should
work to avoid.
Microsoft has identified a number of conventions and practices that it considers bad
practice, and you should make sure that you avoid them.
Using the sp_ prefix in user-defined stored procedure names
If the sp_ prefix is used for a user-defined stored procedure, it causes SQL Server to search
the master database first and then the local database. SQL Server will stop searching when it
finds the first stored procedure that matches the name it is looking for. As a result, the master
database stored procedure will be executed if it's marked as a system-stored procedure, and the
local database stored procedure will not be executed.
Another problem is identification. Using the sp_ or sp prefix makes it difficult to tell the difference between your own stored procedures and the system-stored procedures that come with
SQL Server.
The best practice, according to Microsoft, is to label user-defined stored procedures with the
usp_ prefix.
Using uppercase and lowercase inconsistently
It really doesn't make a difference how you use upper- and lowercase. You can use them alone,
separately, or mixed. The latter can be useful because it gives visual cues about where key
parts of the object name begin and end, especially with compound names. Two examples of
common capitalization conventions are PascalCase and camelCase. Examples of PascalCase
include such names as OrderDetails or CustomerAddresses. Examples of camelCase include
names like myAddress and vendorTerms.
Again, the choice is really up to you, but the worst thing you can do is be inconsistent. This
problem becomes a disaster if you install a database or an application on a case-sensitive
server, causing operations that don't exactly match the case usage of an identifier to fail.
By commonly accepted convention, SQL key or reserved words are usually expressed in all
uppercase text while object names are primarily expressed in some form of lowercase text.
Using spaces or nonalphanumeric characters in object names
In a word, don't, unless of course you like to overcomplicate things and use extra keystrokes.
Using spaces complicates code and forces you to use delimiters around identifiers, or double-quote marks around table and column names. Microsoft recommends the use of the underscore (_) as a word separator. Mixed cases can also help.


Naming tables with the tbl prefix


In a word, don't. While Microsoft Access database developers commonly use the tbl prefix,
the presence of table names in the FROM clause of a SELECT statement in SQL Server
makes the table names unambiguous.
Including a datatype abbreviation in a column name
CERTIFICATION READY?
Suppose a query references a table named Orders and a stored procedure references a table
named ORDERS in the same database. Are these the same object? What if, instead, there
were columns in three different tables named OrderNumb, OrderID, and OrderNo. What do
you think of those different names?

The biggest problem with following this convention is the maintenance cost. For example,
when you change a column's datatype, you have to change the column name or else invalidate
the convention. Keeping up with this purely arbitrary naming convention adds no value, a
clear case of the juice not being worth the squeeze.
Using short or abbreviated object names
There is no reason to stick to obscure and cryptic short names any longer. The point of naming something is to identify it, not cause you to play a guessing game.
Using reserved words as object names
This is not only bad practice, but also rife with possible disaster. Using reserved words for
object names means that you constantly have to delimit identifiers with square brackets or
double quote marks. This makes your code difficult to read, and again, for no good reason.
The possibility also increases that the now-difficult-to-maintain SQL commands may fail.

ENCOUNTERING VENDOR NAMING CONVENTIONS


Vendors and contractors may be an unexpected problem and potential pitfall. Naming conventions
defined by a vendor may conflict with your organization's naming conventions. In that instance,
staff members from your organization will need to learn the vendor's naming standards if they're
responsible for maintaining a vendor-supplied system. Although this isn't necessarily a bad thing, it
isn't uncommon for some of the vendor's standards to be accidentally applied to your system and to
start coexisting with your naming standards. This is something you need to guard against.
If your organization is outsourcing database development work, it's recommended that
you devise good naming standards and conventions for the contractor to apply. Otherwise,
you may find yourself with a contractor-supplied database design that is at odds with your
practices and that is difficult to maintain.

DOCUMENTING AND COMMUNICATING DATABASE NAMING CONVENTIONS


If you don't already document work on and about the database infrastructure as a matter of
course, you should. A critical task in the creation and implementation of naming conventions
and standards is to document those you've adopted. Conventions can evolve over time, so it's
important to keep such a document concise, clear, and up to date. In some cases, you may
need to customize the document for a specific project.
Another obvious need is to distribute database conventions and standards to all staff members
who need that information. Establish mechanisms to ensure that all database developers, administrators, and testers in the organization can access the latest version of the document. You can
use tools such as Microsoft SharePoint Portal Services to share these documents and keep control of document versions. Or, you can post them on your organization's intranet. Either way, the
key is to make sure that conventions and standards are disseminated and enforced.

LAB EXERCISE

Perform Exercise 9.1 in your lab manual.

If you're engaging external vendors, contractors, or even another department or branch of
your organization in a database project, make sure the people involved know about and are
required to follow naming conventions. Establish mechanisms to check for naming convention
compliance by partners. These can include random reviews or having staff dedicate time
to the task as part of a quality assurance process. The time you spend double-checking can
save you much more time later.
In Exercise 9.1, you'll examine and evaluate the object names in the AdventureWorks database
that ships with SQL Server 2005. You'll provide examples of good (and in some cases bad)
naming practices that have been followed.


Defining Database Standards


THE BOTTOM LINE

Just as you need to establish naming conventions, you need clearly defined database
standards. These standards cover T-SQL coding, database access, and change deployment.
In this section, you'll examine the why and how of database standards and learn some basic
ways of creating and managing standards.
Ironically, the need for standards is an inevitable result of the flexible and freewheeling way in
which development has grown. As software developers created more and better database programs,
with maximum flexibility and an open invitation to innovate, they sowed the seeds of confusion.
Developers can now use many different techniques for accessing databases. They can document their code in a number of different ways (assuming they document it at all). At the same
time, different teams can deploy databases and applications to the production environment in
a variety of ways. The problem with all this creativity and inventiveness is that it has unleashed
a form of documentation anarchy. When different developers and teams follow their own
individual practices, they can end up creating code and databases that are difficult to maintain.
Similarly, letting different groups deploy applications and databases in uncontrolled ways can
lead to chaos, possibly resulting in security failure, if not complete system breakdown.
Having no infrastructure standards, or, worse, having them and not enforcing them, is an
invitation to inconsistent behavior in the database and its application as well as development
of old-fashioned, poor-quality applications.
A key activity in designing your infrastructure must include database standards that are
clear, sensible, and enforced. Defining and using standards will alleviate many problems and
provide a number of benefits. For example, if you require developers to follow a standard
technique for accessing and manipulating databases, the result should be code that a different developer can maintain with a minimal learning curve. At the same time, you'll be more
confident of the quality of the applications being built.
In addition, defining database standards can help your team, department, or organization operate more systematically and can reduce the time it takes to learn new systems or move from one
system to another. Defining a standard process for deploying databases and database applications
reduces the scope for errors, minimizing the likelihood of system failure and security breaches.
Any list of database infrastructure standards is necessarily incomplete because every organization has its own unique needs. As with naming conventions, database infrastructure standards
tend to be developed or enhanced by the organization that uses them, so there is no such
thing as an exhaustive list.
However, there are general standards that are nearly universal. The following sections describe
the types of standards you should consider defining.

Transact-SQL Coding Standards


The first step you should take is adjusting any preconceived notions you have about
Transact-SQL (T-SQL) code and thinking of it as true source code. Database T-SQL
code such as stored procedures, triggers, and scripts is the most common means of
implementing critical portions of database applications. When dealing with T-SQL, you
should use source-code control and enforce standards for good coding practices. You
should also ensure that all developers apply the appropriate coding standards when
performing code reviews.
T-SQL coding standards should cover a wide range of functional areas, including transaction
and error handling, stored procedure unit testing, and debugging mechanisms. Standards
should stipulate good commenting and stylistic practices, making stored procedures, functions, views, T-SQL statements, and any T-SQL code items easy to understand and maintain.


DEFINING T-SQL STANDARDS


When defining T-SQL coding standards, you might want to follow these common
recommendations:

LAB EXERCISE

Perform Exercise 9.2 in your lab manual.

• Use templates for each type of object, such as stored procedures, user-defined functions,
  views, and triggers. The templates usually contain predefined code that guides developers
  through the items they should implement. Templates can also contain boilerplate areas
  for descriptions, the author, the date of creation, and a log of changes and reasons for
  the changes.
• Adopt the following stylistic standards:
  • Prefix every reference to a database object with the name of the schema it belongs to.
  • Indent every block of code appropriately.
  • Use uppercase letters for all SQL and SQL Server keywords.
• Apply the following functional standards to database code objects, whether based on
  T-SQL or managed code:
  • Ensure that code in triggers can handle multiple inserts, updates, or deletes, not just a
    single row.
  • Never use T-SQL user-defined functions (UDFs) to perform searches on other tables by
    executing a lookup for some value based on a key. This use of UDFs can result in poor
    performance if a UDF is used as part of a SELECT query that returns many records.
  • Avoid using cursors inside stored procedures. Cursors are exceptionally poor
    replacements for set-based queries and should be used only when absolutely required.
  • Require that stored procedures avoid creating and using temporary tables unless they
    improve performance.
  • Employ TRY . . . CATCH constructs to perform error handling. This helps simplify the
    logic of a T-SQL block and avoids the use of @@ERROR functions in repeated tests.
In Exercise 9.2, you'll use Template Explorer to work from an existing template for T-SQL code.
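
The following hedged sketch shows the TRY . . . CATCH pattern recommended above inside a stored procedure; the procedure and table names are assumptions.

CREATE PROCEDURE dbo.uspInsertOrder
    @CustomerID int
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        INSERT INTO dbo.CustomerOrder (CustomerID) VALUES (@CustomerID);
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        -- Report the error once instead of checking @@ERROR after every statement
        DECLARE @msg nvarchar(2048);
        SET @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END CATCH
END;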

DOCUMENTING, DISSEMINATING, AND REVIEWING CODING STANDARDS


As with naming conventions, you should make proper and detailed documentation of T-SQL
coding standards one of your most critical job functions. Good documentation makes it easier
for new database developers and database administrators to adapt to the practices adopted by
your organization.
Having a coding standard makes performing code reviews much easier. Code that follows a good standard will be correctly aligned, will have a consistent level of comments and
documentation, and will be easier to read than code formatted in a helter-skelter style.
Reviewers can concentrate on issues such as the suitability of the algorithms used and verifying that the code solves the problem for which it's designed, rather than struggling just to follow it.
Include standards related to what developers should and should not do with regard to T-SQL
code. Often there are multiple ways to accomplish an action. If you specify the desired way
as part of the standards documentation, you have a much better chance that developers will
adhere to this standard.
To disseminate the standards, you can use a communication portal tool such as SharePoint
Portal Server. Similarly, you can use a shared drive or a company intranet site.

Defining Database Access Standards


Databases can be accessed in a number of ways, so defining a standard mechanism for
accessing a database makes it simpler to enforce best practices and maintain security. For
example, you can specify that all data access be done using stored procedure calls from
a client application or middle-tier components. Doing so allows you to modify the
database schema or tune queries without needing to modify client or middle-tier code.


Another reason to develop database access standards is that when applications access databases
in a wide variety of nonstandard ways, it becomes much more difficult to optimize systems,
trace connections when identifying performance problems, and enforce security best practices.
The lack of a data access standard can also needlessly increase the complexity of the deployment process, adding an unnecessary level of fragility to an application (and the database).
Prudence dictates that, as with most infrastructure activities, you should develop a set of standards or rules for accessing databases that you can apply to your entire infrastructure.
The first question you should consider is whether you want to allow users and applications to
access the data in a database directly or only indirectly.

DIRECTLY ACCESSING THE DATABASE


Permitting direct access to database data results in a tight coupling between the SQL commands that an application uses and the database schema. If you modify the tables or views in
the database, you'll probably need to modify the application as well. Similarly, if you want to
tune the queries used by an application, you'll probably have to change the application source
code and then redeploy the application. As a result, it isn't normally advisable to allow direct
access to the data in a database.
You can use at least two mechanisms to implement indirect access to the data in a database:
you can specify that applications must use stored procedures, or you can restrict all data
access to views.

INDIRECTLY ACCESSING THE DATABASE THROUGH STORED PROCEDURES


Using stored procedures to access the database has many advantages:
Applications aren't tightly coupled to the database schema and don't rely on a fixed
structure of tables and columns. You can modify the structure of tables without affecting the application; all you need to do is update the stored procedure to use the new
schema, and it will then return the same results and take the same parameters.
Stored procedures can shield operations that involve sensitive data that
should be hidden from the user or application.
It's easier to optimize and tune queries without affecting or needing to modify applications that use the stored procedure.
This method can reduce network traffic by encapsulating logic in the server rather than
in the client applications. Note that SQL Server can generate, optimize, and
reuse the same query execution plan when the same stored procedure is used repeatedly.
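As a brief illustration of this approach, the sketch below grants an application role EXECUTE permission on a procedure without granting any permission on the underlying table; the table, procedure, and role names are hypothetical.

-- The procedure is the only supported path to the employee data.
CREATE PROCEDURE HumanResources.usp_GetEmployeeServiceDates
    @EmployeeID INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT EmployeeID, HireDate, TerminationDate
    FROM HumanResources.Employee
    WHERE EmployeeID = @EmployeeID;
END;
GO

-- The application's role can execute the procedure but has no SELECT
-- permission on HumanResources.Employee itself.
GRANT EXECUTE ON HumanResources.usp_GetEmployeeServiceDates TO AppDataAccess;
GO

Because ownership chaining applies when the procedure and its underlying table share an owner, callers never need direct table permissions, which is part of what makes this standard attractive from a security standpoint.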

INDIRECTLY ACCESSING THE DATABASE THROUGH VIEWS


You can create a view for each table and provide access to the views rather than to the underlying tables. You can also design views that join tables or generate summary data. Some of the
advantages of using views to access data include the following:
Views can hide complex SQL logic from applications and reduce the coupling between
an application and a database, enabling you to modify the underlying tables without
requiring that you change the application.
Views can be configured to be selective about the information they make available to
end users and applications, based on the identity of the end user or application using
the views.
Applications can be selective in the data that they retrieve. This differs from using
a stored procedure, because when using a view, you can configure an application to
retrieve only the columns required to implement functionality. For example, an application that displays the hire and termination dates of employees doesn't have to retrieve the
salaries of employees, even if this data is present in the view.
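A minimal sketch of that idea follows, again using hypothetical object names; the view exposes only non-sensitive columns, and the application queries only the columns it needs.

-- The view deliberately omits sensitive columns such as salary.
CREATE VIEW HumanResources.vw_EmployeeServiceDates
AS
SELECT EmployeeID, HireDate, TerminationDate
FROM HumanResources.Employee;
GO

GRANT SELECT ON HumanResources.vw_EmployeeServiceDates TO AppDataAccess;
GO

-- The application retrieves only what it displays.
SELECT EmployeeID, HireDate
FROM HumanResources.vw_EmployeeServiceDates
WHERE EmployeeID = 42;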
One place where stored procedures are a better option than views is in reducing network
traffic. The amount of logic you can encapsulate in a view is limited when compared to that


available in a stored procedure. To compensate, you would need to place more of the logic in
client applications, which is not necessarily the best approach.

DOCUMENTING AND COMMUNICATING DATABASE ACCESS STANDARDS


As before, and with all standards, documentation and dissemination are crucial tasks. You
should ensure that all developers and designers responsible for building database applications
are aware of the data access standards, and, of course, enforce them.
One way of ensuring enforcement, if you're responsible for making sure database access standards
are followed, is to validate every application that uses a database and confirm that developers have
followed the appropriate data-access standards before allowing the application to be deployed
in a production environment. To disseminate the standards, you can use a communication
portal tool such as SharePoint Portal Server. Similarly, you can use a shared drive or a
company intranet site.

Deployment Process Standards


This section discusses standards for deploying and coordinating changes to database structures
and the matching application code. Database structure and application code are normally
interdependent, so deploying even small changes can require careful coordination of the
steps in the deployment process.
Application deployment is an inherently complex activity. Logistically, it's a
complicated process, often involving several teams, which makes it a breeding ground for snafus. Having
deployment standards helps reduce the complexity of deployment and clarifies procedures
when the unexpected happens.
Similarly, an application development life cycle can be a long and complicated process, with
the goal of deploying the application to a production environment. Databases are often handled independently. For example, a single database may serve multiple applications and have
its own development life cycle.
Consequently, you can't create standards for database deployment in a vacuum. You must
define them in conjunction with standards for application deployment. Just as the application
deployment process requires a good, well-documented, and well-tested deployment plan, so
does the database deployment process.
The process of deploying a database has some unique features that require special attention to
ensure that the database will meet the necessary quality and reliability standards after deployment to the production environment. It's important to remember that deploying a modified
database is considerably different from deploying a modified application. Unlike an application, you can't replace a database with a newer version. Instead, as part of the deployment process, you must ensure that the contents (such as data and database objects) are transformed and
transferred as well. You'll likely have to create scripts that update the structure of a database
and make the appropriate changes to the data. Finally, you must provide a way to roll back the
changes and revert to the previous version of the database if the deployment fails.
The following sections discuss some guidelines for developing database deployment standards.

DEFINING THE ROLE OF DEVELOPMENT, TESTING, AND PRODUCTION DATABASES
As the first step, make sure you clearly distinguish the role and location of development, test,
and production databases. Normally, these are stored on different servers.
Developers should develop only using the development database. When a developer creates
scripts that build or modify the development database, the scripts should initially be tested
on the development server. When development is complete, then and only then should the
scripts be transferred to the test server and used to construct or update the test database.
If testing fails at any point, developers update the scripts in the development environment


before sending them back for testing. When testing has been completed, the same scripts are
then used to build or update the production environment.
To ensure the validity of the development and test environments, it's most efficient to build
the development and test databases from backups of the production database (making sure
to protect or remove any sensitive data if applicable). You should utilize a source-control system to maintain the latest versions of table schemas, stored procedures, and all database code
objects. The database source code should be versioned and labeled following the style adopted
for the overall application development project. Save all deployment scripts, including those
implementing schema changes and data modifications, in the same source-control system.
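A rough sketch of the backup-and-restore step follows; the database name, logical file names, and file paths are placeholders, and backup and restore are covered in more detail in Lesson 11.

-- On the production server: take a full backup of the database.
BACKUP DATABASE Sales
TO DISK = N'\\BackupShare\Sales_Full.bak'
WITH INIT;
GO

-- On the test (or development) server: restore the backup, relocating files as needed.
-- Mask or remove any sensitive data before opening the database to developers.
RESTORE DATABASE Sales
FROM DISK = N'\\BackupShare\Sales_Full.bak'
WITH MOVE N'Sales_Data' TO N'T:\Data\Sales.mdf',
     MOVE N'Sales_Log' TO N'T:\Logs\Sales_log.ldf',
     REPLACE;
GO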

DEFINING METHODS FOR PROTECTING PRODUCTION DATA DURING THE DEPLOYMENT PROCESS
Your highest priority is to ensure the integrity of the production database. (Consider having
that sentence engraved on your keyboard!) Make no mistake; losing a production database
because of sloppy handling can be a career ender.
How do you do that? There are many methods, but using the following tips as appropriate
should keep you safe from an egregious error:

REF

Lesson 11 covers backup in detail.

Allow only production database administrators to access the production database.


Doing so creates responsibility and ownership. And using only database administrators
familiar with the production environment will probably reduce the number and type
of errors.
Make changes to the database only by using T-SQL scripts. You can use the SQLCMD
tool to run T-SQL scripts from the command line or from command scripts, and you can also
parameterize those scripts. Doing this lets you make sure you're running the same commands in the development, test, and production environments and makes it easy to repeat the changes if necessary (a sketch of such a script follows this list). You should perform thorough unit and integration testing of these scripts.
Back up all affected databases before deployment. It's easy to forget a step as obvious
as this under the stress of a deployment operation, so make sure you explicitly document
it as a step in the deployment process. (Engrave this one on the keyboard as well.) Doing
so can save you; it's almost a given that databases crash only when there's no backup.
Have a rollback plan. If the deployment fails, you don't want to be improvising on the
fly. Believe it or not, acting without a plan usually makes the problem worse. Rollback
may be a simple matter of restoring the database from a backup. However, if new data
has been added that must not be lost, you need to have developed and tested scripts to
perform the rollback operation that also preserve the data.
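The following is a minimal sketch of such a parameterized deployment script; the server name, database name, backup path, and schema change are all hypothetical, and the sqlcmd options shown (-S, -i, -v) are the standard switches for specifying the server, the input file, and scripting variables.

-- deploy_changes.sql: run with the SQLCMD utility, for example:
--     sqlcmd -S TESTSQL01 -i deploy_changes.sql -v TargetDB="Sales"
-- The identical script can then be replayed against development, test, and production.

USE [$(TargetDB)];
GO

-- Back up the target database before applying any change (a documented deployment step).
BACKUP DATABASE [$(TargetDB)]
TO DISK = N'D:\Backups\$(TargetDB)_predeploy.bak'
WITH INIT;
GO

-- The schema change being deployed (hypothetical example).
ALTER TABLE Sales.Customer ADD LoyaltyTier TINYINT NULL;
GO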

DEFINING THE ROLES AND RESPONSIBILITIES OF STAFF


Your deployment plan should clearly indicate what staff member or role is responsible for executing each step. The documentation should also clearly specify the sequence of steps and include
a decision tree detailing the options available for each step depending on whether it succeeds or
fails. The plan must also specify how long the deployment process will take and any dependencies
that other production systems have on the database. If the deployment requires a period when
the database is unavailable to production, make sure the deployment plan document identifies
the staff members who should be notified as well as how long the system will be unavailable. It's
best to schedule deployments that include service unavailability during off-peak hours.
RECORDING CHANGES IN A RUN BOOK
A run book logs all actions taken by a database administrator that affect production databases.
This includes anything that modifies the database or server configuration and any specific
changes, based on user requests, made to data.
The run book gives you a precise record of all the changes you've made to the database and
the date and time of each change. This will help you to reproduce or undo these changes if
necessary. You can also use the run book to ascertain whether an error was caused by your
actions or an external event.


It's best to keep your run book as a document in your source-control tool, so that you have
access to all versions.
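If you also want a structured, queryable log alongside that document, one possible sketch is a simple logging table in a utility database; the table layout and all names here are illustrative only, not a prescribed format.

CREATE TABLE dbo.RunBook
(
    EntryID        INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    ChangeDate     DATETIME       NOT NULL DEFAULT GETDATE(),
    PerformedBy    SYSNAME        NOT NULL DEFAULT SUSER_SNAME(),
    TargetDatabase SYSNAME        NOT NULL,
    ChangeRequest  NVARCHAR(50)   NULL,      -- ticket or user-request reference
    Description    NVARCHAR(MAX)  NOT NULL,
    RollbackNotes  NVARCHAR(MAX)  NULL
);
GO

-- Example entry recording a configuration change made for a user request.
INSERT INTO dbo.RunBook (TargetDatabase, ChangeRequest, Description, RollbackNotes)
VALUES (N'Sales', N'REQ-1042',
        N'Increased max server memory to 12 GB.',
        N'Rerun sp_configure with the previous value of 8 GB.');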

Database Security Standards

REF

For more information about designing and implementing security policies for SQL Server, see Lessons 4 through 7.

Database security is an extensive and important topic, and it brings complexity of its own.
Setting standards for database security can help reduce that complexity across your entire infrastructure. For example,
you may decide to require that all users log in to SQL Server by using Microsoft
Windows authentication, thereby enforcing the same level of security for database users
as at the network level. Although you should be aware of the need to set database security standards, detailed discussion is beyond the scope of this lesson.
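As a small sketch of what such a standard looks like in practice, the statements below grant access through a hypothetical Windows domain group and database instead of creating SQL Server (mixed-mode) logins.

-- Grant access via a Windows group so that authentication stays at the network level.
CREATE LOGIN [CONTOSO\SalesAppUsers] FROM WINDOWS;
GO

USE Sales;
GO
CREATE USER [CONTOSO\SalesAppUsers] FOR LOGIN [CONTOSO\SalesAppUsers];
GO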

SKILL SUMMARY
In this lesson, you learned about the importance of naming conventions and database control
standards as part of an effective infrastructure. You learned how to design a flexible naming
convention system that maximizes effectiveness. You learned naming convention best practices.
You also read about bad naming practices and how to avoid them. If you've ever questioned
the value of a naming convention system, that doubt has been laid to rest.
You also learned that a naming convention system that is inflexible is as valueless as one that
doesn't exist. You reviewed some methods, such as synonyms, for dealing with existing databases
that have no or poor naming conventions.
You examined how defining database standards can help your team, department, or organization
operate more systematically and reduce the time it takes to learn new systems or to move from
one system to another. You learned techniques for defining standard processes to deploy databases and database applications and how doing so minimizes the chance for errors and the potential
for system failure and security breaches. You learned about run books and how to use them when
deploying and maintaining your databases and applications.
You also read about best practices for coding and database access standards and how they
integrate with the security standards that you learned about in other Lessons.
For the certification examination:

Understand the benefits of naming conventions. It's important to know how naming
conventions work and how to develop a flexible set of naming standards.

Understand the common bad naming practices. Just as it's important to know what a
good naming convention is, you should also know the most typical errors and how to
avoid them.

Be familiar with T-SQL coding standards. Understand what makes a good T-SQL code
standard, what common errors to avoid, and how to use Template Explorer to minimize
the risk of error.

Be familiar with database access standards. Understand the best practices for database
access and how to define them as standards.

Be familiar with database deployment standards. Understand the best practices for database
deployment and how to define them as standards. Understand how to plan deployment.
Make sure you understand the value of assigning roles and how to do so effectively. You
should also be aware of the various functions of development, test, and production servers
and how and when to use them. Be aware of how to protect data during deployment or
changes, as well as when and how to plan rollbacks in the event of a deployment failure.


Knowledge Assessment
Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
1. Which of the following are benefits of having database naming conventions? (Choose all
that apply.)
a. Provides a method to organize infrastructure
b. Reduces the learning curve for new database administrators
c. Makes coding easier
d. All of the above
2. Which of the following are the most important attributes of a naming convention?
(Choose all that apply.)
a. Flexibility
b. Regulatory requirements
c. Consistency
d. Size of the organization
3. Which of the following database objects should have a naming convention? (Choose all
that apply.)
a. Database
b. Table
c. Trigger
d. Index
4. Which of the following practices should not be followed?
a. Prefixing a view with vw_
b. Prefixing a stored procedure with sp_
c. Using prefixes with schema
d. Using the prefix ufn to define a user-defined function
5. Which of the following are good naming practices for indexes? (Choose all that apply.)
a. Combine the name of the table and the names of the columns.
b. Specify whether the index is clustered or nonclustered.
c. Include a prefix such as IX_.
d. Use spaces to separate key elements.
6. When you have an existing database with poorly named objects that cannot be renamed,
what is the best way to improve the clarity of the naming conventions?
a. Use a lookup table.
b. Create a new column.
c. Note in your standards documentation what the poorly named object actually
represents.
d. Use a synonym.
7. Which of the following is not a bad practice for naming conventions?
a. Using the sp_ prefix in user-defined stored procedure names
b. Inconsistent use of uppercase and lowercase letters
c. Using numbers in the name
d. Using reserved words for object names
8. Which of the following are not recommended names for tables in a SQL Server
database? (Choose all that apply.)
a. Person.Address
b. Person.Address Type
c. tbl_Person.AddressType
d. dbo.MSmerge_history


9. Consider the following trigger name found in the AdventureWorks database: ddlDatabaseTriggerLog. Which of the following characteristics does it have? (Choose
all that apply.)
a. Proper use of uppercase and lowercase letters
b. Proper use of a prefix to indicate the type of operation performed
c. Proper use of the word Trigger in the name
d. Proper use of alphanumeric characters
10. Consider the following index name found in the AdventureWorks database: AK_
BillOfMaterials_ProductAssemblyID_ComponentID_StartDate. Which of the following
good naming convention practices are followed in this name? (Choose all that apply.)
a. The index name includes a prefix indicating its type.
b. The index name includes the name of the original table.
c. Names of objects are separated by an underscore.
d. All of the above.
11. Which of the following is a useful tool available in SQL Server Management Studio for developing T-SQL code for database objects?
a. Business Intelligence Development Studio
b. Template Explorer
c. Object Explorer
d. Solution Explorer
12. In T-SQL code, you should adopt which of the following stylistic standards? (Choose all
that apply.)
a. Prefix every reference to a database object with the name of the schema it belongs to.
b. Indent every block of code appropriately.
c. Use lowercase for all SQL and SQL Server keywords.
d. None of the above.
13. You should apply which of the following functional standards to database code objects
whether based on T-SQL or managed code? (Choose all that apply.)
a. Ensure that code in triggers can handle multiple inserts, updates, or deletes, not just a
single row.
b. Use T-SQL UDFs to perform searches on other tables by executing a lookup for
some value based on a key.
c. Use cursors inside stored procedures.
d. Require that stored procedures avoid creating and using temporary tables unless they
improve performance.
14. Which of the following are true regarding allowing users and applications direct access to
data in a database? (Choose all that apply.)
a. It results in a tight coupling between the SQL commands that an application uses
and the database schema.
b. It improves database security.
c. Modifying tables or views in the database will likely require modification of the
application.
d. It streamlines troubleshooting.
15. Which of the following mechanisms can be used to implement indirect access to the data
in a database? (Choose all that apply.)
a. Triggers
b. Indexes
c. Stored procedures
d. Assemblies


16. Which of the following are good deployment practices? (Choose all that apply.)
a. Require developers to use the test rather than the production database.
b. Do a complete backup of the production database before applying changes.
c. Utilize a source-control system to maintain the latest versions of table schemas, stored
procedures, and all database code objects.
d. Allow only production database administrators to access the production database.
17. What must be in place prior to initiating a deployment from test to production?
(Choose all that apply.)
a. Backup of the development database
b. Definition of roles and responsibilities of staff involved
c. Sequence of steps to follow
d. Rollback plan
18. What should you use to log all actions taken by a database administrator that affect a
production database?
a. Transaction log
b. Shipping log
c. Desk calendar
d. Run book
19. The word NewYorkYankees is an example of what style of casing?
a. Hungarian case
b. Reverse Polish case
c. camelCase
d. PascalCase
20. Which of the following methods and tools can you use to ensure proper dissemination of
documentation regarding naming conventions, coding standards, rollback plans, deployment sequence, and other control procedures and standards? (Choose all that apply.)
a. Network share
b. Intranet site
c. SharePoint Portal Service
d. All of the above

Designing a SQL Server Solution for High Availability

LESSON 10

LESSON SKILL MATRIX

TECHNOLOGY SKILL                                                          70-443 EXAM OBJECTIVE
Develop a strategy for migration to a highly available environment.      Foundational
Analyze the current environment.                                         Foundational
Ascertain migration options.                                             Foundational
Choose a migration option.                                               Foundational
Design a highly available database storage solution.                     Foundational
Design the RAID solutions for your environment.                          Foundational
Design a SAN solution.                                                   Foundational
Design a database-clustering solution.                                   Foundational
Design a Microsoft Cluster Service (MSCS) implementation.                Foundational
Design the cluster configuration of the SQL Server service.              Foundational
Design database mirroring.                                               Foundational
Design server roles for database mirroring.                              Foundational
Design the initialization of database mirroring.                         Foundational
Design a test strategy for planned and unplanned role changes.           Foundational
Design a high-availability solution that is based on replication.        Foundational
Specify an appropriate replication solution.                             Foundational
Choose servers for peer-to-peer replication.                             Foundational
Establish a strategy for resolving data conflicts.                       Foundational
Design an application failover strategy.                                 Foundational
Design a strategy to reconnect client applications.                      Foundational
Design log shipping.                                                     Foundational
Specify the principal server and secondary server.                       Foundational
Switch server roles.                                                     Foundational
Design an application failover strategy.                                 Foundational
Design a strategy to reconnect client applications.                      Foundational
Select high-availability technologies based on business requirements.    Foundational
Analyze availability requirements.                                       Foundational
Analyze potential availability barriers.                                 Foundational
Analyze environmental issues.                                            Foundational
Analyze potential problems related to processes and staff.               Foundational
Identify potential single points of failure.                             Foundational
Decide how quickly the database solution must failover.                  Foundational
Choose automatic or manual failover.                                     Foundational
Analyze costs versus benefits of various solutions.                      Foundational
Combine high-availability technologies to improve availability.          Foundational

KEY TERMS
database mirroring: A
technology for continuously
copying all data in a database
from one server to another
so that in the event that the
principal server fails, the
secondary server can take over
the processing of transactions
using its copy of the database.
failover: A switch between the
active and standby duplicated
systems that occurs automatically
without manual intervention.
Sometimes known as switchover.
high availability: The continuous
operation of systems. For a
system to be available, all
components including application
and database servers, storage
devices, and the end-to-end
network need to provide
uninterrupted service.
log shipping: A technology for
high availability that is based on
the normal backup and restore
procedures that exist with SQL
Server. In this environment,
transaction-log backups are made on the principal server and then copied to the secondary server.
merge replication: A method of
replication that transfers data
from one database to one or
more other databases. Data can
be changed in more than one
location. This may cause conflicts
to arise.
mirror database: The passive or
secondary database in a mirroring
configuration. Also known as the
secondary database.
principal database: The
active database in a mirroring
configuration.
principal server: A machine that
during normal operating conditions
provides the services that a service
such as SQL Server offers.
quorum: The majority of servers
in a mirroring configuration.
A quorum of two servers
determines which database
is the principal server. In a
normal situation, the principal
database and the witness
form a quorum that keeps this
primary server functioning as the
primary database in a mirroring
configuration.
passive or secondary database
in a mirroring configuration. Also
known as the mirror database.
single point of failure: A
vulnerability whose failure leads
to a collapse of the whole.
snapshot replication: A method
of replication that involves
database snapshots. This form
of replication is not a high-availability solution.
transaction replication: A
method of replication that
transfers transactions from one
database to one or more other
databases. Changes to data are
not allowed on the receiving
database(s).
witness server: An optional third
server used in some mirroring
configurations to initiate the
automatic failover within seconds
of the principal server failing.


A highly tuned, efficiently designed, well-configured database is of no use if it isn't available. The past few years have seen some huge disasters across the world, from the tsunami
in the Indian Ocean to Hurricane Katrina in the United States. Each of these has taken
its toll in many ways, most of them far more severe than the loss of a database server's
availability. However, these disasters have brought to the forefront the need to ensure that
your computer systems can survive and continue to function in the face of issues with
the primary server.
As SQL Server has matured as a product, increasing numbers of people have called for better
solutions for ensuring their databases are highly available. Microsoft has responded, expanding
the capabilities of SQL Server in this area with each version. With SQL Server 2005, there are
not only more solutions but also solutions that are easier to implement and administer.
The holy grail of availability measurements is the five nines, which corresponds to an uptime
or availability of 99.999 percent. This equates to a yearly downtime of about five minutes, barely
enough time for a reboot on most servers.
Although a single server probably can't achieve this level of availability for any appreciable length
of time, using two or more servers with a technology to move data, connections, and the other
parts of a SQL Server application to another server can help you get to this level of reliability.
This Lesson looks at the four main technologies used in SQL Server solutions to achieve a
highly available database server.

Examining High-Availability Technologies


THE BOTTOM LINE

SQL Server incorporates four technologies to enable you to build a highly available solution:
clustering, database mirroring, log shipping, and replication.
Before looking at any particular high-availability solution, you should first examine the goals
of a highly available system. There are some common misperceptions as to what benefits and
capabilities a high-availability (HA) designed system brings to a particular company. As with
any technical solution, the choice of HA technology should ensure that the
business requirements for availability and cost are met.

Identifying Single Points of Failure


A single point of failure is a person, component, or process that brings down the system
when it stops working. This can be a DBA who forgets to run a critical process or a
memory chip that fails and crashes a server. The goal of high-availability systems is to
withstand a single failure and continue to function.
Your database server contains multiple points of failure, some of which can be mitigated, and
some of which cant. Suppose you install Windows 2003 and some edition of SQL Server on
your laptop computer and begin responding to client requests for a web application you've
built. Your single points of failure are as follows:
CPU. A CPU failure will crash your server.
Power supply. Most laptops have a single power supply, so its failure will crash the
system.
Disk drive. Most laptops don't contain any type of RAID technology, so a single drive
failure will crash the system.


Network connection. Most laptops have a single network interface card (NIC) and
a single path to connect to the network, so the NIC, cable, or switch can crash your
system.
Windows 2003. Until you implement some type of HA technology, the Windows
operating system host is a single point of failure.
SQL Server and the application. The software components of your system, subject to
patching and changes, can fail, resulting in a system crash.
You could have a single point of failure in other places, but these are the primary ones. Some
of these can be mitigated (arguably, all of them) with a technology such as clustering. Some
components (for example, the built-in laptop mouse) might cause problems if they failed but
probably wouldn't crash the system. If, however, a simple mouse failure left you unable to take
critical actions, you run the risk that a minor failure could lead to more significant problems.
The key goal of your HA system design is to eliminate as many single points of failure as
possible. This usually means designing redundant parts into the system, such as RAID drive
arrays, spare power supplies, and so on; but it can also include developing a plan for alternate
ways of running the system in the event of a disaster.
In designing your HA system, you must examine all the components, down to the cables that
connect the systems, and assess the impact of any particular piece of equipment failing. In
building the HA system, you should have a way to mitigate any of these failures, preferably
with an automated response.
You should also think creatively about related parts of your system. Consider patches and
upgrades, staff, vendor resources (such as your Internet connection), and more to ensure that
every part of the system, from server to client, has as few single points of failure as possible.

Setting High-Availability System Goals


Although each of these technologies works in a slightly different way, the goal for all of
them is to ensure that your data can be accessed almost all the time. This implies that the
components of SQL Server that are likely to fail won't affect an application's ability to
query and change the data. The HA technologies built into SQL Server don't necessarily
guarantee that any particular hardware component or even the Windows host will continue
to function, but rather that the services provided by SQL Server (the ability to access
data) will continue to be available to clients. This goal should be accomplished in
tandem with preventing the loss of any data. Usually, this requires synchronization of
the data between various copies that exist on different systems.
TAKE NOTE

An HA system is often
referred to as having no
single point of failure,
meaning that any one
component that fails
won't affect the ability
of the database server
to function.

In setting your goals, be sure you're meeting the needs of your organization and not just
building an HA system focused on uptime. The cost of the solution, whether automatic or
manual failover is required, and the impact on the finances of the company based on ROI are
all factors that should be incorporated into your design goals.
The machine that provides the services that SQL Server offers (access to data, messaging
queues, and so on) is generally referred to as the principal server. This is the Windows host
that is running SQL Server and to which the clients connect. Any servers that are set up and
ready to take over the services in the event of a disaster are called secondary servers.
A disaster in this context is any event that causes an interruption of service by the primary SQL
Server machine. This could be something as minor as a power cord that becomes unplugged,
as major as a hurricane that destroys the data center, or anything in between. Whatever event
occurs, it's classified as a disaster for the primary SQL Server, and the HA solution chosen is
used to bring a secondary server online and allow clients to access the data on this server.
The event of moving the service from the primary server to a secondary server is called
a failover. This can be automatic or manual and doesn't necessarily imply a disaster has


occurred. Often, a failover is forced to occur in some situations, such as when patches are
applied, to minimize the downtime of the database.
Some of the technologies provide for an automatic failover of the SQL Server service to
another machine in the event of a disaster occurring on the primary machine. Others require
a manual intervention, with an administrator performing an action to bring the secondary
database online. No matter which solution you choose, there will be a delay as the secondary
server comes online, during which the database will be inaccessible. This delay and its
frequency affect the amount of uptime you'll be able to achieve.
Each HA technology has advantages and disadvantages. Table 10-1 lists a few of the characteristics of each technology. These characteristics will affect your choice of an HA solution in
your environment.
Table 10-1
High-availability comparison

TECHNOLOGY           FAILOVER            SPECIAL HARDWARE REQUIRED   HA SCOPE
Clustering           Automatic/Manual    Yes                         Server
Database mirroring   Automatic/Manual    No                          Database
Log shipping         Manual              No                          Database
Replication          Manual              No                          Database

As shown in Table 10-1, some of the technologies support an automatic failover, which
implies a minimal delay during which the database is unavailable during a disaster. Others
require a manual intervention, which can involve substantial delays if administrators aren't
readily available to complete the failover.
Only one technology, clustering, requires special hardware. More details regarding the implications of choosing this technology are given in the section on clustering. This requirement can substantially affect your ability to choose clustering for budgetary reasons.
The last column in Table 10-1 shows the scope of each technology as related to its HA
capabilities. Clustering operates at the server level, which means that all databases, logins,
jobs, and so on are covered in the HA solution and will failover to the secondary server.
Notification Services and Reporting Services can be configured to run under a clustered
solution and failover along with SQL Server in the event of a disaster.
The other three technologies are designed to operate at the database level, which means that
server-level items, jobs, logins, endpoints, and so on must be synchronized on the secondary
server and then enabled on that server manually if appropriate. These technologies only
ensure that the database data itself is available in the event of a disaster.

TAKE NOTE

Some technologies, such as the Service Broker and Notification Services, are contained
completely within a database. These services don't fail over automatically to the
secondary server. Manual intervention is required to ensure that these services continue to
function in the event of a disaster.
HA technologies can greatly assist you in providing a stable data environment to your applications and clients, but they aren't without limitations. Those limitations, along with some
misconceptions, are discussed in the next section.

Recognizing High-Availability System Limitations


Each HA technology has specific limitations that will be discussed in individual sections.
This section will examine some of the general limitations of HA technologies along with
some of the problems that these technologies don't solve.


The primary goal of any HA database system is to ensure that the database is always available,
any time of day or night, no matter what happens to any particular server. Although this is
the goal, there will always be a minimal amount of downtime as services move from the primary server to the secondary server. This can range from seconds to minutes in an automatic
failover to (potentially) hours for manual failovers. In choosing an HA technology and justifying the choice to management, you should explicitly state the downtime potentials even if the
technologies function exactly as designed.
Data-loss prevention is a goal of any HA solution in addition to ensuring access to the data. This
is usually accomplished by keeping the server accessible, and also by preventing hardware or software failures from causing any information stored in your database to be lost. The various technologies do this to varying degrees, some allowing no loss at all and others allowing you to specify
how much data youre willing to lose. This is expressed in terms of time, because the synchronization of data from the primary to the secondary servers takes place at a user-determined interval.
The administrator usually balances this goal of preventing data loss against the performance
or monetary costs of configuring a particular HA solution. This is often a difficult point
to explain to a nontechnical person, particularly a person in a management position.
Management never wants to hear that data could be lost and assumes that high availability
guarantees no data will be lost. An HA solution can be configured this way, but implementing
an HA solution isn't an absolute guarantee that no data will be lost.
An HA solution provides for database services to be available on one of two or more machines
in the event of a disaster. It does not, however, provide additional performance potential or
load balancing across the multiple machines. In most cases, the secondary server machine isn't
providing any database services for the application being protected by the HA solution. The
machine could be performing other functions, including supporting other SQL Server 2005
instances or databases, but it isn't providing additional performance to the database or application covered by the HA solution.
There are a few exceptions with database mirroring and log shipping, but the possible performance gains may not continue in a failover situation.

TAKE NOTE

Often, nontechnical individuals think that a cluster of two machines implies that half of
the requests are serviced from each machine, thereby providing a performance gain. HA
solutions are strictly for availability increases, not performance increases for your databases.
In addition to not providing additional performance for the application, the HA solution
doesn't load-balance clients for the database services. At any particular time, one or more of
the secondary servers have resources that aren't being used and that are available for use only in
the event of a disaster.

Understanding Clustering
THE BOTTOM LINE

Clustering is a technology that uses Windows Cluster Services to provide multiple server
nodes, each capable of providing SQL Server services, using a central shared database on shared disk
drives that are typically set up in a SAN.
Clustering technology is based on Microsoft Cluster Services (MSCS) and has been available
since Windows NT 4.0 and SQL Server 6.5. This is often the first choice for administrators
who desire a highly available database server.
As shown earlier in Table 10-1, clustering operates at the SQL Server instance level, meaning
that all the instance services are protected from a hardware failure. In the event of a disaster,
all databases, logins, jobs, and other server-level services move to the secondary server.


A failover cluster works by having various resources (in this case, including SQL Server)
installed on the cluster's nodes. A node is any Windows server participating in the cluster. At
any given time, only one node can own a particular resource and use it to provide services to
clients. In the event of a disaster, the service fails over to another node that activates its copy
of that service and begins responding to clients.
Clustering in SQL Server 2005 has been expanded from SQL Server 2000. SQL Server
Agent, Analysis Services, Notification Services, and replication are included in failover clusters
with SQL Server 2005; SQL Server 2000 only included failover of the database services.
Disk resources are shared among all nodes, eliminating the need to keep a separate copy of
any data for the resource synchronized on multiple nodes.
Abstraction for the client is provided by presenting a virtual instance of the service (in this
case, a SQL Server 2005 service) to clients. Clients connect to this virtual instance rather
than to the actual instance running on the Windows server node. When the failover occurs,
this virtual instance moves to the secondary node, but its presentation on the network
remains the same, so clients don't need to be reconfigured.
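Because clients always connect through the virtual name, it can be handy during testing to confirm which physical node is currently servicing the instance; a quick sketch using the standard SERVERPROPERTY function follows.

-- Returns the virtual (network) name clients use and the physical node currently hosting it.
SELECT
    SERVERPROPERTY('MachineName')                 AS VirtualServerName,
    SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS CurrentPhysicalNode;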
Clustering is also the most complex technology of those presented in Table 10-1. Clustering
imposes additional demands on the database administrator and equipment to provide this
level of HA capability.

Understanding Clustering Requirements


As mentioned, failover clustering is built on the MSCS offered by the Windows operating
system. Before you can implement a SQL Server 2005 cluster, you need to have a
Windows cluster built on the host operating system. The following are some requirements
to implement a cluster:

WSC-certified hardware. The hardware used for your cluster solution must be on the
Windows Server Catalog (WSC) as a cluster-certified system. Each server node is the same
type and size of system. It's important to choose hardware from the cluster section, because
not all WSC resources are certified for clusters. If your solution will include a Storage Area
Network (SAN) device, then make sure the total solution is included on the WSC.
Shared disk resources. A special shared disk subsystem must be set up to allow all cluster
nodes to connect to the same physical disks. This usually requires specialty hardware.
Geographic limitations. Because a shared disk subsystem is involved, there are limitations as to how far the clustered nodes can be from each other. This is due to the
requirements for low network and disk latency. Although this distance increases as network speeds increase, the limit can prevent your solution from continuing to function
in some disasters.
Additional network configuration. A cluster requires a network link between the
nodes (recommended to be a private network link) that allows the nodes to exchange
a heartbeat. This lets each node ensure the others are still functioning. Additional hardware may be required on each node.
Additional costs. In addition to ensuring that the cluster hardware is on the WSC,
often you must purchase additional resources, memory, disks, CPUs, or whole servers to
provide HA capabilities with a cluster solution. This can substantially increase the cost
of implementing this HA technology over other choices.
Software licensing is also a consideration, because all nodes participating in the cluster must
have the same version of SQL Server and hardware. This can add substantially to the cost of
the solution if you must license per processor, especially if you use an active/active solution
(defined in the next section). At the time of this writing, passive nodes don't require their own
SQL Server 2005 license.


Designing a Clustering Solution


A failover cluster solution is relatively expensive with SQL Server 2005 because of the
hardware requirements. This option is listed first because all your design decisions will
likely be limited by the budget for your solution; carefully consider your financial limitations when going through the design.

The first part of your design involves determining the type of cluster scenario to implement.
With SQL Server, you must make two intertwined decisions. The first is the number of nodes
that will be a part of your cluster. The number of nodes is limited by the underlying MSCS cluster and
operating system as well as by SQL Server itself. With Windows Server 2003 or 2008 Datacenter
edition and SQL Server 2005 or 2008 Enterprise edition, eight-node clusters or sixteen-node
clusters, respectively, are possible. This means up to eight (or sixteen) Windows nodes can
be connected in a cluster, but because each Windows node can have multiple SQL Server
instances, you can actually cluster more than eight SQL Server instances. There are issues
with resource requirements, so in a practical configuration, it's unlikely you'd have more than
eight virtual SQL Server nodes present.
The Standard edition of SQL Server 2005 or 2008 is limited to two nodes, and the
Workgroup edition doesn't support failover clustering. Windows 2000 supports only two
nodes unless you use the Datacenter edition, in which case four nodes are supported.
Related to the number of nodes is the configuration of each node. Any individual node can
be set to be an active node, meaning that it's the primary server for a virtual SQL Server and
responds to client requests, or a passive node, meaning that its SQL Server service isn't actively
responding to requests and is awaiting failover from another node. These configurations are
referred to as active/active clusters or active/passive clusters.
This can be confusing, so consider a few examples. The simplest cluster is an active/passive
two-node cluster. In this configuration, shown in Figure 10-1, SQLProd01 is the primary
server and responds to client requests sent to SQLProd, the virtual instance. SQLProd02 is
the passive node, running idly and not responding to any client requests.
Figure 10-1
Two-node active/passive cluster (diagram: clients connect to the virtual instance SQLProd, which is hosted by nodes SQLProd01 and SQLProd02 attached to a shared disk)


If SQLProd01 fails for some reason, SQLProd02 will become the primary server after failover
and start responding to client requests. Only one server's resources are used at a time, meaning that half your server hardware (excluding disk drives) isn't being used at any given time.
In this case, only one SQL Server license is needed for the one virtual server.
A second example, illustrated in Figure 10-2, shows a three-node, active/active cluster with three
physical servers and three virtual servers. In this case, each server is actively used at all times to
do work, and three SQL Server licenses are required for the three active server instances.
The failover strategy is more complex in this example, with each server having a designated
failover server in a round-robin fashion. Table 10-2 shows the virtual servers, primary physical
instance, and the failover physical instance.
Figure 10-2
Three-node active/active clustering (diagram: clients connect to the virtual instances SQLProdA, SQLProdB, and SQLProdC, hosted by nodes SQLProd01, SQLProd02, and SQLProd03, all attached to a shared disk)

Table 10-2
Three-node failover

VIRTUAL SERVER   PRIMARY SERVER   SECONDARY SERVER
SQLProdA         SQLProd01        SQLProd02
SQLProdB         SQLProd02        SQLProd03
SQLProdC         SQLProd03        SQLProd01

If any node fails, then the virtual server moves to another instance. However, when this
occurs, one physical server will be spreading its resources to serve two virtual instances. In
this example, if SQLProd02 fails, then SQLProd03 must serve clients connecting to both
SQLProdC and SQLProdB.
In order for the applications to function at a similar performance level, each server must have
enough spare processor cycles and memory to handle the additional load of a second instance
in the event of a failover.


The last example, shown in Figure 10-3, has a four-node cluster with three virtual nodes. In
this configuration, the cluster is set up in an N+1 configuration with three active nodes. One
passive node acts as the failover node for any of the three active nodes. This type of cluster
requires three licenses for software; in addition, the passive node must have enough hardware
resources to handle the load for any one of the other three nodes.
Figure 10-3
Four-node cluster in an N+1 configuration (diagram: virtual instances SQLProdA, SQLProdB, and SQLProdC run on nodes SQLProd01 through SQLProd03, with SQLProd04 as the passive failover node; all nodes attach to a shared disk)

In all of these examples, the cluster solution should be designed with a specific performance
goal in mind. Because the secondary node in any of these cluster examples will receive an
increased load in the event of a failover, its hardware should be designed to handle the desired
level of performance. If the same level of performance is desired, as it often is, the secondary
server should have the same hardware configuration as the primary.
If the cluster is in an active/active configuration, then each server should have enough hardware to handle its own load as well as the additional load from the node that would fail over to
it. When the same performance is expected from the secondary server, a level of hardware
equivalent to that on the primary server will sit idle until a disaster event occurs. This idle
hardware is essentially an insurance cost that must be weighed against the cost of downtime
if the SQL Server instance fails.

Clustering Enhancements
SQL Server 2008 includes a set of enhancements for improved clustering. These
enhancements generally rely on enhancements included with Windows Server 2008. The
SQL Server enhancements include the following features:

Cluster Validation Tool. This is a tool provided with Windows Server 2008. SQL Server
2008 requires a successful result from this tool in order for clustering to proceed.


Improved installation and setup of cluster nodes.


Expanded maximum cluster nodes (OS dependent).
Support for various other Windows Server 2008 OS clustering enhancements.
Rolling Upgrades and Patches. SQL Server instances on a cluster can now be upgraded
one node at a time. This reduces the amount of downtime needed for upgrades because
only the database portion of an upgrade requires that the entire cluster be unavailable
for client access.

Considering Geographic Design


A clustering solution is generally contained within a single data-center facility, but advances
in fiber channel and iSCSI technologies make it possible to geographically disperse such
a solution to multiple sites. Doing so usually increases the cost of the clustered solution
greatly, but it provides for fault tolerance beyond a single site. Keep in mind, however, that
any clustered solution depends on a shared disk array, which is a single point of failure
for the cluster. Network latencies must be tightly controlled with any cluster, but especially with a geographically dispersed cluster. Make sure your budget allows for the proper
equipment to ensure high performance between the nodes and the disk subsystem.
If you choose to disperse your cluster across multiple sites, work closely with your hardware
vendors to ensure that the hardware chosen and the network design meet the requirements.
The WSC has a separate cluster section for geographically dispersed clusters.
The disk subsystem is an important part of any clustered solution. The disks are shared,
although access is arbitrated to ensure that only one node controls any particular disk mount
point at a time. This disk subsystem can be a single point of failure and should be designed to
be highly available itself. HA disk subsystems are discussed later in this Lesson.

Making Hardware Decisions


The hardware choices for your cluster come from the WSC list, but you should carefully
consider expandability in your decisions. Because you're essentially requiring the purchase
of two or more matched servers, if you outgrow your hardware, the need to upgrade will
require purchasing two or more new solutions instead of just one. It's recommended that
the hardware you purchase have the capability to add more memory or CPUs later if
required. If you determine that you need four CPUs per node, you may wish to purchase
eight-CPU servers with just four CPUs installed. That way, you can add four CPUs later
to each node if necessary.

When designing your cluster hardware, keep in mind that you need to design for the performance goal of each node when another node has failed. This usually means that the hardware
chosen for processor and memory needs should be able to handle the load of the primary and
secondary virtual servers that may potentially be running together.
For example, looking again at the three-node cluster example shown previously in Figure 10-2,
assume that each virtual SQL Server instance requires two CPUs and 4 GB of RAM to
meet its performance goals. The failover design requires that each node have four CPUs and
8 GB of RAM. If SQLProd01 fails, then SQLProd02 will be running both SQLProdA and
SQLProdB. To meet the performance goals, four CPUs must be dedicated to each instance,
for a total of eight CPUs. The RAM must be similarly configured.
Because your servers may not have equal CPU and memory requirements between the
multiple applications, you should add the needs of the instances that will run together and
purchase the amount of resources required for that node.


TAKE NOTE

If your requirements dictate an odd number of processors or memory that doesn't fit into
your hardware choices, it's better to round both up to the next number of processors or
RAM. If you determine that a server needs three CPUs to meet performance goals, purchase four CPUs rather than two. This will be a minor additional expense and some hardware choices may require an even number of processors anyway.
Your hardware design also needs to specify how the hardware will be configured with the
different instances that are running. If you have a passive node that will support only one
instance in a failover situation, then you can dedicate all the resources to this instance.
However, if you're running active/active clusters, you should specifically dedicate an amount
of memory to each instance. Doing so prevents problems when the second instance starts up
during a failover event and the two instances compete for RAM. You should also set an affinity
for CPUs between the instances to ensure that enough processor resources are set aside in case
of a failover event.
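As a minimal, hedged sketch (the 4 GB value is only an example and must come from your own sizing), memory can be dedicated to an instance with sp_configure:

-- Run on each instance in the cluster; cap this instance at 4 GB so that a
-- partner instance failing over to this node still has memory available.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
-- Processor resources can be reserved in a similar way with the 'affinity mask' option.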

Addressing Licensing Costs

CERTIFICATION READY?
Understand how
clustering differs from
other technologies
such as mirroring or
replication.

The last part of designing a clustered solution is being aware that the versions of Windows
and SQL Server must be the same on all nodes. You can't mix editions or 32-bit and 64-bit versions in a cluster. This can have implications for the combinations of different SQL
Servers onto a clustered solution. If you have applications that only require 32-bit SQL
Server 2005 Standard Edition and others that require 64-bit Enterprise Edition, then
combining them on a cluster may mean spending more money on hardware and licensing
for the Standard Edition applications than is justifiable. You should perform a careful ROI
analysis when combining different applications to be sure the cost is worth the benefits.

Understanding Database Mirroring


THE BOTTOM LINE

Database mirroring is a technology in SQL Server that uses two copies of the database
and provides for automatic failover in the event that one database experiences a disaster
event.
Database mirroring technology was designed to provide a very high level of database availability using lower-cost hardware than clustering. There are a number of differences between
clustering and database mirroring, and the best choice for your environment depends on the
particular needs of your organization.

TAKE NOTE

Database mirroring wasn't supported in the initial Release To Manufacturing (RTM)


version of SQL Server 2005 distributed in November 2005. However, with the release of
Service Pack 1 and subsequent Service Packs for SQL Server 2005, database mirroring is
a fully supported technology.
Some of the differences between clustering and database mirroring are as follows:
Hardware. Clustering requires matching and supported hardware from the cluster section of the WSC, including a shared disk subsystem. Database mirroring can work with
any hardware supported by Windows 2003; in addition, the two servers can use completely different hardware, resulting in a much lower hardware cost.
Disk failure. Clustering doesn't protect against a disk failure because the database used by
both the principal and secondary nodes resides on the same disks. Database mirroring protects against disk failure by having a copy of the database on each server's separate disks.


Failover delay. Clusters fail over in 30 seconds to a few minutes, depending on the time
required to start the secondary instance and fail over the resources. Database mirroring can fail to the mirror database in a few seconds. Both technologies allow manual or
automatic failover.
Scope. Clustering operates at the server level, including SQL Server Agent, Notification
Services, other services, and all databases. Database mirroring only works at the database
level and requires that logins and any server-level resources that are required be synchronized across both servers.

TAKE NOTE

The master, model, and


msdb databases can't be
protected by mirroring.

From this list, it may appear that database mirroring addresses most of the shortcomings of
clustering, fails over more quickly, and should be used everywhere clustering was previously
used. Although database mirroring does provide many benefits, it isn't always the best choice.
The limited scope of database mirroring to a single database means that more administrative
work is required to ensure that the application will continue to function correctly in the event
of a failover.
Database mirroring is a robust technology that is usually easier to set up and administer
than clustering, at a much lower cost. With its ability to provide for limited reporting using
database snapshots, fast failover times, and zero-data-loss protection, it's a great alternative for
many organizations' user databases.
This technology works by applying all log records (essentially, every change that occurs)
from a principal database to a secondary database. The exact timing of this application
depends to some extent on how the database mirroring is configured. The application ensures
that all changes made to the principal database are reflected on the mirrored copy.
The next section will examine the configuration of a database-mirroring environment.

Designing Server Roles for Database Mirroring


A typical database mirroring setup includes either two or three servers, each providing
one of the three roles involved in database mirroring. The use of a third server is optional
to implement database mirroring, but it is required if you want automatic failover.
The principal database is the live database being protected with database mirroring. Its role is
referred to as principal, whether noting the actual database or the server instance on which it's
running. All changes made to the data from users or client applications occur on this database.
The secondary database, which receives the changes in the form of log records and has them
applied, is the mirror database. This role is the partner of the principal database and exists
perpetually in a loading state as log records from the principal are applied to this database.
The third role is that of the witness. This is an optional server used in some circumstances to
initiate the automatic failover within seconds of the principal server failing. Any edition of
SQL Server, from Express to Enterprise, can act as a witness in database mirroring.
The witness works with the principal or mirror to form a quorum of servers. A quorum
of two servers determines which database is the principal server. In a normal situation, the
principal database and the witness form a quorum that keeps this database functioning as the
primary database. If communication with the principal fails, the mirror and the witness can
form a quorum to switch the mirror database's role to that of the new principal. If the mirror
database can't communicate with the principal, the witness and principal can still form a
quorum to prevent failover. A single server instance can function as a witness for multiple
database-mirroring sessions.
When a failover event occurs, whether automatic or manual, the principal and mirror switch
roles.


Understanding Protection Levels


Database mirroring can operate in one of three different modes, each offering a different
level of protection for the principal database. Each is described next along with the situations in which you may choose to employ that particular level.

UNDERSTANDING HIGH-PERFORMANCE MODE


The level that offers the least data protection but the best performance is high-performance mode.
In this mode, log records are sent from the principal to the mirror, but the principal doesn't wait
for confirmation that the mirror has written those log records to disk before moving on to other
transactions. The two servers operate asynchronously, which allows for the best performance of the
principal database but may potentially result in some data loss if a failover is forced.
In this mode, automatic failover isn't allowed, and an administrator must manually force the
switch of roles with a forced service failover. This causes an immediate recovery of the mirror
database, which can involve data loss if not all the transaction log records have been received
by the mirror database. A witness isn't recommended for this mode, but if one is configured,
then it's required to maintain a quorum. If a witness is present and the mirror goes down, then
the principal database must maintain a connection to the witness or it will take itself offline.
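As a hedged illustration (the database name AdventureWorks is only an example), high-performance mode is selected by turning safety off on the principal, and a forced service failover is later issued on the mirror if the principal is lost:

-- On the principal: run the mirroring session asynchronously (high-performance mode).
ALTER DATABASE AdventureWorks SET PARTNER SAFETY OFF;

-- On the mirror, only if the principal has failed: force service, accepting possible data loss.
ALTER DATABASE AdventureWorks SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;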
This mode is most useful in an environment where you can tolerate some data loss, but you
can't tolerate the delays for all log records to be acknowledged. This may be the case if the
two servers are separated by large distances or many hops. You can choose this mode for
applications that stream data, such as price quotes, if the nature of the data is volatile but not
necessarily critical if some of it is lost in a disaster.

UNDERSTANDING HIGH-PROTECTION MODE


The intermediate level of data protection is called high-protection mode. In this mode, there
is no witness server, but the principal and mirror databases function synchronously. When a
log record is sent from the principal to the mirror, the mirror sends back to the principal an
acknowledgement that the log record has been written to disk. Once this has occurred, both
databases can then update the data with the change.
In this mode, only manual failover is supported.
This is a good mode to use if you don't have a witness server or if you want to manually initiate a failover in the event it's necessary. Some applications may require configuration changes
to move to a new server, so an automatic failover of the database doesn't allow the application
to keep running. This mode also may be desired if you want to ensure that two servers keep
their data synchronized, but not necessarily for disaster-recovery purposes. You may use the
mirror server for reporting or some other purpose and not require the automated failover. If
the mirror server goes down, the principal continues to operate unaffected although the data
isn't mirrored any longer. Once the mirror comes back up, the transactions must be applied to
the mirror before it's synchronized.

UNDERSTANDING HIGH-AVAILABILITY MODE


The third mode of operation for database mirroring is high-availability mode, which is similar
to high-protection mode but requires a witness server to form a quorum and determine the
principal server. This mode also operates synchronously, with the mirror database acknowledging
all log records from the principal database. Because all log records transferred are acknowledged
before they're written to the database, the two databases remain synchronized at all times.
In this mode, a quorum is used to determine which server is the new principal if a server fails.
Either the old principal and the witness or the old principal and the mirror can form a quorum and maintain the status quo of the principal database. If the old principal is unreachable,
however, the mirror and witness can form a quorum and switch roles to make the mirror
database the new principal database. If the mirror server goes down, the principal and witness
continue to operate.
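For illustration only (the server name, domain, and port shown are assumptions), the witness is registered from the principal once the partners are running synchronously:

-- On the principal: ensure synchronous operation, then name the witness endpoint.
ALTER DATABASE AdventureWorks SET PARTNER SAFETY FULL;
ALTER DATABASE AdventureWorks SET WITNESS = 'TCP://witness01.contoso.com:5024';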


This mode is most appropriate for situations requiring automatic failover and zero data loss.
In conjunction with new ADO.NET 2.0 or SQL Native Client features, clients can automatically redirect to the mirror server when a failover occurs.

Designing a Database-Mirroring Solution


Just as in clustering, the performance requirements will affect the type of database mirroring setup you choose. If you can afford the hardware to meet your performance goals while
running in high-availability mode, this is the best choice for an HA system. If you have
hardware or network limitations, then you may opt for high-performance mode instead.
Your design should consider which databases need to be protected and then choose a separate
SQL Server instance to handle the mirror role. It's possible to mirror to another instance of
SQL Server on the same Windows host as the principal database, but this approach provides
availability only in the event that the principal instance of SQL Server is unavailable. To
achieve a higher level of availability, you should specify another instance of SQL Server on
a physically separate Windows host.
In building your HA solution using database mirroring, distance isn't a limiting factor.
Providing you have the network bandwidth, the principal and mirror databases can reside on
opposite sides of the earth.

TAKE NOTE

Because database mirroring operates at the database level, you must set up a separate mirroring session for each database on an instance that you wish to protect. These separate
databases don't all have to mirror to the same mirror server. You can use different physical
servers for each mirror database.

Your client applications, however, may dictate the feasibility of using database mirroring.
If you're using Open Database Connectivity (ODBC), Object Linking and Embedding
Database (OLEDB), or an older database connectivity technology, then you'll need to code
custom connection logic or set up some sort of load-balancing solution to allow clients to
redirect to the principal server. Using a load-balancing appliance and DNS names for connectivity can seamlessly allow clients to connect to the proper server automatically.
If you don't have this type of solution, you may have to manually alter the connection strings
for your client application in the event of a failover. Although the database may be available almost instantly after a failover, your clients won't realize this until their application can
reconnect. The ability to redirect to the failover server is critical in designing your database-mirroring environment.
If you're using the SQL Native Client or ADO.NET 2.0 technologies, the connection strings
for connectivity can contain a primary and mirror server. This lets the client seamlessly find
the appropriate server.
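For example, a SQL Native Client or ADO.NET 2.0 SqlClient connection string can name both partners (the server and database names here are placeholders):

Data Source=SQLProd01;Failover Partner=SQLProd02;Initial Catalog=AdventureWorks;Integrated Security=True;

If the server listed as the Data Source can't be reached, the provider attempts the server named as the Failover Partner.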
In designing the mirror solution for your environment, make sure to account for the fact that
mirroring protects at a database level, not a server level. This means that as you add logins,
they should be added manually on the mirror database as well. Any server-level jobs that you
have running must be set up on the mirror database as well.
One last consideration is that the mirror database isn't accessible or available to clients. It sits
idle, accepting transactions until it switches roles and becomes the principal in a disaster. One
way around this is to configure a snapshot based on the mirror database. Doing so gives you
a point-in-time view of the data. However, this snapshot must be continually dropped and
rebuilt to see the data changes occurring in the mirror database.
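A hedged sketch of creating such a snapshot on the mirror server follows; the logical file name and path are assumptions and must match the mirrored database's actual files:

-- Run on the mirror server; the snapshot exposes a read-only, point-in-time view.
CREATE DATABASE AdventureWorks_Snapshot
ON (NAME = 'AdventureWorks_Data',
    FILENAME = 'C:\SQLData\AdventureWorks_Snapshot.ss')
AS SNAPSHOT OF AdventureWorks;

-- Drop and re-create the snapshot periodically to see newer data.
DROP DATABASE AdventureWorks_Snapshot;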


Configuring a Database-Mirroring Solution


Once you've designated a particular database as the principal and determined which server
will host the mirror database, you can begin to configure your database-mirroring environment. You should have determined that the mirror database has sufficient resources to host
a copy of the principal database as well as to handle the client load in the event of a failover.

TAKE NOTE

The endpoint must be


created on both the
principal and mirror
servers with matching
ports.

TAKE NOTE

The restores need to use


the WITH NORECOVERY
option on the mirror
server.

LAB EXERCISE

Perform Exercise 10.1 in your


lab manual.

The first step in enabling database mirroring is to ensure that the security for the database
mirroring session is set up. You have the choice to use either Windows authentication or certificates for the log-record transfer. This choice depends on your situation. If the servers are in
the same domain or in a trusted domain, then you can use Windows authentication. If youre
coming from an untrusted domain, you can use certificate authentication. In either case, a
login must be set up on the mirror server to allow the transfers.
The log-record transfers take place through the use of a special endpoint called a database-mirroring endpoint. This endpoint must be established on both the principal and mirror servers, and
the network configured to allow traffic on the specific port chosen for communications. A server
instance has only one database-mirroring endpoint, which all mirroring sessions on that instance share; if you run multiple SQL Server instances on the same Windows host, each instance's endpoint must listen on a different port. These endpoints are created using the CREATE ENDPOINT command.
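As a sketch only (the endpoint name, port number, and use of Windows authentication are assumptions), the endpoint on each partner might be created like this:

-- Run on the principal and on the mirror (a witness would use ROLE = WITNESS).
CREATE ENDPOINT MirroringEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (AUTHENTICATION = WINDOWS NEGOTIATE, ROLE = PARTNER);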
Once you have the communication channel set up, you must initialize the mirror database.
You do so by taking a full backup on the principal server and restoring it on the mirror server
using the same database name as the principal and the NORECOVERY option, which leaves the database in a restoring state. All log backups taken since the full backup on
the principal must also be restored, again with NORECOVERY, on the mirror server. This ensures that a full copy of the
principal database's data is on the mirror database.
At this point, both servers are configured for mirroring, and the session can be enabled.
Starting with the mirror server and then on the principal server, run the ALTER DATABASE
command with the SET PARTNER option to specify the opposite server and designated
TCP port for mirroring. Doing so enables database mirroring and begins the transfer of log
records from the principal to the mirror.
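A minimal sketch of this step, assuming placeholder host names and port 5022 on both partners:

-- Step 1: on the mirror server, point back to the principal.
ALTER DATABASE AdventureWorks SET PARTNER = 'TCP://SQLProd01.contoso.com:5022';

-- Step 2: on the principal server, point to the mirror; this starts the session.
ALTER DATABASE AdventureWorks SET PARTNER = 'TCP://SQLProd02.contoso.com:5022';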
In Exercise 10.1, you'll set up mirroring on the AdventureWorks database. This exercise
assumes that the database server was installed as the first named instance on the C: drive. If
you've installed your server in a different place, modify the paths to match your system. Be
sure the recovery model for your AdventureWorks database is set to full. In addition, a second
instance of SQL Server 2005 is required. It can be on the same server or a different server.
This exercise has SSC10\SS2K5 as the primary instance and SSC10\Sales as the secondary
instance. Again, adjust for your circumstances.
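If needed, the recovery model can be checked and changed with statements such as the following (shown here for the AdventureWorks sample database):

-- Database mirroring requires the full recovery model.
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'AdventureWorks';
ALTER DATABASE AdventureWorks SET RECOVERY FULL;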

Testing Database Mirroring


Once you've enabled a database-mirroring setup, it's important to test the setup to ensure
that mirroring is working and that your failover database can pick up the load. This also
ensures that your client applications can connect to the failover server.

You should test for two types of failover events: planned failovers, such as for maintenance
activities; and unplanned failovers, which are any failovers that haven't been scheduled and
communicated to the appropriate people.
For planned failovers (typically maintenance activities such as hardware or software
upgrades), you can develop a testing strategy using the manual failover commands. This
entails running the ALTER DATABASE command with the SET PARTNER FAILOVER
option on the principal server, which forces a failover from the principal database to the mirror
database. Because this command will be run during a planned failover, you can ensure that all
clients connect to the mirror server, that all data changes have been synchronized on the mirror database, and that all logins are available. Your testing strategy should ensure that a new


TAKE NOTE

If this test is conducted


on a production server,
make sure all affected
clients are aware that the
test is coming.

login is added on the primary as well as some particular piece of data changed prior to the
failover. If your configuration and procedures are correctly set up, you'll be able to test that
those changes have been copied to the mirror server.
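A hedged sketch of such a test follows; the login, table, and database names are purely illustrative:

-- On the principal: create marker objects before the planned failover.
CREATE LOGIN FailoverTestLogin WITH PASSWORD = 'Str0ng!Pa55word';
USE AdventureWorks;
CREATE TABLE dbo.FailoverMarker (MarkedAt datetime NOT NULL DEFAULT GETDATE());
INSERT INTO dbo.FailoverMarker DEFAULT VALUES;

-- Still on the principal: initiate the planned role switch.
USE master;
ALTER DATABASE AdventureWorks SET PARTNER FAILOVER;

-- On the new principal (the former mirror): verify the marker row arrived, and
-- remember that the login itself must be re-created there manually.
SELECT MarkedAt FROM AdventureWorks.dbo.FailoverMarker;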
Unplanned events are slightly harder to test, but they can still be simulated. As with planned
events, you should explicitly create marked data, logins, and possibly other server-level items
on the principal. You can simulate an unplanned failover by pulling the network cable out of
the principal server. Doing so simulates a hardware or software abend (abnormal end) on the
principal server as well as a network failure, any of which could cause the failover.
If your servers are configured for automatic failover, you can check whether the marked data, logins, or other specific events have been copied to the mirror server correctly. Because some of these
objects require manual synchronization by a database administrator, you should have procedures
in place to handle the case where the objects haven't yet been moved to the mirror server.

TAKE NOTE

Because the principal database may not be available in a real disaster situation, you can't
refer to that SQL Server instance for the details of the object. You should make a paper
record or offline notation of the objects as part of the procedure for creating them on the
principal database.

In either of these test cases, make sure you test connectivity to the mirror database from all
the locations that require connectivity. This is especially important if you have geographically
dispersed mirrored servers. Your SQL Servers may fail over quickly, but if clients can't access
the remote SQL Server, then the application won't be seen as available.

Mirroring Enhancements
SQL Server 2008 includes a series of changes designed to improve mirroring performance. Prior to SQL Server 2008, mirroring was a one-way activity in that the principal
server sent data to the mirror server. Now, with SQL Server 2008 and its automatic page
repair feature, each server can attempt to recover page data from the other server
participating in the mirror. If page repair is successful, all of the data is preserved. This
is because the second server in the mirror should have a perfectly good copy of the data
with which to perform the repair. In contrast, correcting errors by running DBCC CHECKDB
with the REPAIR_ALLOW_DATA_LOSS option might require that some pages, and therefore
data, be deleted. Note, however, that if a corrupted page has been caused by some form of
drive hardware failure, recovery may not be possible and immediate attention should be
given to the situation.

Other enhancements to mirroring with SQL Server 2008 include:

CERTIFICATION READY?
Be prepared to answer
questions involving
a witness server.
Is a witness server
required for automatic
failover? What are the
hardware and software
requirements for a
witness server?

Compression of mirroring data. The log data being transmitted from server to server
is now compressed. The result should be less latency between a change on the primary
server and the corresponding change on the mirror server.
Write-ahead log. The mirror server begins writing incoming log data to disk before all of it
has arrived, which also improves the speed of completing transactions.
Improved efficiency of log send buffers. SQL Server 2005 reserves an entire log send
buffer for any log flush operation. SQL Server 2008 now appends log records to the current buffer if enough space is available.
Read-ahead during undo. During a planned mirror failover, the new mirror server (the
former principal server) must undo all transactions that are not completed on the new
principal server (the former mirror server). Page read-ahead improves the efficiency of
this operation.


Understanding Log Shipping


THE BOTTOM LINE

CERTIFICATION READY?
Logs are usually moved
to the standby server on
a schedule, perhaps
every 15 minutes. If the
file transfer takes 20
minutes to complete, log
shipping may not be a
suitable option. Watch
for these details in the
certification test's lengthy
scenario.

Log shipping is a technology for high availability that is based on the normal log-file backup
and restore procedures that exist with SQL Server.
In a log-shipping environment, transaction-log backups are made on the primary server and
then copied to the secondary server, where they're restored. Prior to SQL Server 2005, the
Enterprise edition of SQL Server was required for this process to be automated, but many
people developed their own scripts to simulate log shipping with the Standard edition.
In the event of a disaster situation, the final transaction logs are restored on the secondary
server, and then the status of that database is changed from a loading state to an active one.
These final steps must be performed manually or with custom scripts. SQL Server provides
no automatic way to do this.
Because log shipping uses regular file-transfer methods between servers, the log backups can
be copied to multiple servers, allowing multiple servers to be used for redundancy. This is an
advantage over clustering or database mirroring, although you can combine log shipping with database mirroring to copy the log backups to additional servers.
Log shipping also has another advantage over database mirroring and clustering: You can use
the secondary database for reporting and other read-only queries. If the secondary is a separate server, then the HA resources are put to use instead of standing by idly.
The disadvantage of using log shipping is that the application and server roles don't fail over
automatically. An administrator must manually bring the secondary database online, and you
must develop a method for ensuring that the application will use the secondary server. Because
manual intervention is required, the delay between when a disaster event occurs and when the
secondary server comes online will be greater than either clustering or database mirroring.
Another issue with this technology and failover is that the names of the servers on the network must be different to comply with the Windows networking requirements. You must
develop a method to ensure that the clients can find and connect to the secondary server.
As with database mirroring, this is a database-level protection mechanism. Any server-level
logins, jobs, or other objects must be manually kept in synchronization by an administrator
on both servers.

Choosing Log-Shipping Roles

TAKE NOTE

In planning for more


than one possible failure, or even for the
reporting load of multiple applications, it's
likely that you will have
even more powerful
hardware on the secondary server than on the
primary servers.

A log-shipping configuration includes three possible roles: the primary server, the secondary
server, and the monitor server. As with the other technologies, the primary server is the
production server that clients normally connect to for queries. The secondary server is
the server to which the database fails over if a disaster event occurs. The monitor server,
which is optional, should be a separate server that stores tracking information about the
backups and restores.
Similar to database mirroring, the hardware required is the regular hardware required for SQL
Server. The primary and secondary servers don't need to be the same or even similar hardware.
Similar to clustering, you can have multiple primary databases, from separate instances, all
configured to fail over to a single SQL Server instance. This is similar to the N+1 configuration
used in clustering. It is a common configuration because it's unlikely that more than one
primary server will fail at the same time, so resources are conserved.


The processor and RAM requirements for your secondary servers should be based on the
performance goals that must be met and baselines from your existing servers.

Switching Log-Shipping Roles


When there is a need to fail over to the secondary node, an administrator must perform
the process of bringing the secondary node online as a read-write database. If this database is configured for read-only access, that will continue to work; however, any connections will be dropped when the final restores take place.
The steps for bringing the secondary server online are as follows:
1. Restore all remaining transaction-log backups from the principal server on the secondary
with the NORECOVERY or STANDBY option.
2. If the principal server is still accessible, back up the tail of the transaction log with the
NORECOVERY option.
3. Restore the tail of the transaction log, if available, on the secondary server.
4. Bring the database online by changing the state using the WITH RECOVERY option
of the RESTORE DATABASE command.
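A hedged T-SQL sketch of these steps, with the database name and backup file paths as placeholder assumptions:

-- 1. On the secondary: restore any remaining log backups, leaving the database restoring.
RESTORE LOG Sales FROM DISK = 'D:\LogShip\Sales_tlog_0930.trn' WITH NORECOVERY;

-- 2. On the primary, if it is still reachable: back up the tail of the log.
BACKUP LOG Sales TO DISK = 'D:\LogShip\Sales_tail.trn' WITH NORECOVERY;

-- 3. On the secondary: restore the tail of the log, if one was taken.
RESTORE LOG Sales FROM DISK = 'D:\LogShip\Sales_tail.trn' WITH NORECOVERY;

-- 4. Bring the secondary online as the new read/write database.
RESTORE DATABASE Sales WITH RECOVERY;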
At this point, the secondary database is ready to begin handling read/write traffic. If connection changes must be made for any clients, they can occur now.
If you have multiple secondary servers, perform all these steps on all servers, with the exception of bringing the database online. Only the new primary server should be brought out of
the NORECOVERY or STANDBY state.
This secondary server, however, is still configured to be the secondary server. If this server will
begin responding to client requests and updating data, then its role should be changed from
secondary to principal. To switch roles, do the following:

CERTIFICATION READY?
Remember that log
shipping requires manual
intervention in order to
switch roles between
servers.

1. Disable the backup job on the principal server.


2. Disable the copy and restore jobs on the secondary server.
3. Enable log shipping on the secondary server by using the wizard in Management
Studio or by manually executing the stored procedures:
a. When choosing the secondary database, enter the name of the old primary
database server.
b. Select the option No, the Secondary Database Is Initialized below the name.
Once you've completed these steps, the roles have been reversed. You can perform these steps
as often as needed, although after the first time, you won't need to configure log shipping
again on the secondary; just enable the proper jobs.

Reconnecting Client Applications


When you fail over to a secondary log-shipping server, no mechanism is built into this
technology to enable clients to automatically fail over their connections. In clustering, the
virtual IP and server name remain the same. Database mirroring has failover connections
built into ADO.NET 2.0 and the SQL Native Client technologies. With log shipping,
however, the server name of the secondary server is different from that of the primary server, as is
its IP address. You must develop a method of reconnecting your clients to the secondary server so they can continue to access and update the database.
You can achieve this connection change three ways: manually update connection strings,
rename the secondary server, or use network abstraction techniques to direct client
connections.


The first of these is the most straightforward. In the connection string used by the clients,
whether this is ADO.NET, ActiveX Data Objects (ADO), OLEDB, or another mechanism,
change the name of the server to that of the secondary server. Depending on how centralized your connection-string storage is, this may or may not work well. If you're supporting a
single web server with the connection string stored in a global variable, this is easy to deploy
because only one file is changed. Similarly, if your clients read the connection string from a
central location, then you can easily deploy this to a large number of clients. If the string is
coded into the registry on every client machine, this may not be the best choice for your environment. Your decision to use this method will largely depend on how your application and
its connection strategy are architected.
The second choice is also fairly straightforward, but it's a little more tedious. In this scenario,
you rename the Windows host of the secondary server to the name of the primary server.
Doing so requires that the primary server has already been renamed to something else or that
it's offline. You may or may not elect to also change the network addressing, but that again
depends on your application. You must also rename the SQL Server 2005 instance to match
the Windows host, to ensure that clients can reconnect.
As an example, if your primary server is named SQLProd01 and your secondary server is
named SQLProd02, you take the primary (SQLProd01) offline or rename it to SQLProd03
(or some other unique name). Then, rename the secondary (SQLProd02) to SQLProd01
and also rename the secondary SQL Server instance on SQLProd02 to SQLProd01. If the
old primary is repaired and ready to come back online, you need to rename the secondary
(the original SQLProd02) back to SQLProd02 or another name before bringing the original
SQLProd01 back on the network with that name. Note that renaming also involves name
changes in any Active Directory domain as well as the name entries in your DNS server.
This is confusing, and if the failover isn't permanent, you may not wish to choose this strategy.
Depending on the network configuration of your Active Directory domain controllers and
DNS servers as well as the lifetimes of cached DNS client entries, there may be a substantial
delay while the clients update their cached name lookup entries and the naming converges
onto the IP address of the renamed secondary server.
The final method is preferred by most companies and involves using your network
infrastructure to abstract the SQL Server address from the actual machine. You can
choose to use DNS or a load-balancing scheme to route traffic at the network level to the
appropriate server. For example, if you have all clients connect to a hostname in DNS such as
sql.sqlservercentral.com, instead of SQLProd01, then if SQLProd01 fails, you can change the
DNS entry for sql.sqlservercentral.com to resolve to SQLProd02.
This way, none of the clients must change, and the Windows names used by the clients
remain the same. The convergence of the name to the new address may still take some time as
clients flush their DNS caches.
A load-balancing device, either hardware or software, can be even simpler to use. If you have
all clients address the load-balancing device, it can instantly direct clients to the new server.
This is the preferred method if your network infrastructure supports it.

Understanding Replication
THE BOTTOM LINE

Replication is a term for multiple different types of processes that copy transaction data from
one or more database servers to one or more other servers.
The replication technology available in SQL Server isn't specifically developed for high availability. Instead, replication is designed to enable data to be moved from one or more servers
to another in a publisher-subscriber model. It can be adapted for high availability because it
automates the movement of data to remote servers.


Three types of replication are available in SQL Server: snapshot replication, transactional replication, and merge replication. Because snapshot replication operates on the entire set of data
at one time, this type of replication isn't suitable for an HA solution.
Transactional or merge replication, however, can operate at the transaction level. As quickly as
the transactions can be copied to the distributor and sent to the subscriber, they're executed on
the secondary server, making both of these solutions good for implementing an HA solution.

Implementing High Availability with Transactional Replication


Transactional replication is designed to move data on a batch basis. It can be configured
to send batches as small as a single transaction, potentially keeping the data on the secondary server more up to
date than it might be with log shipping. Log shipping moves the transaction log containing
all transactions over a configured time period. For an HA system, the log is usually moved
every one to five minutes. The log being moved could contain dozens of transactions.
Transactional replication, however, can move data to the secondary server one transaction at a
time, resulting in very low latency for the changes being applied to the secondary server.
In building an HA system based on transactional replication, one of the advantages is that the
secondary system is a fully live system that is available for queries and even updates. If you
can separate the updates between two systems so there are no conflicts, you can implement
a bidirectional transactional replication scheme that sends updates from the primary to the
secondary and the secondary to the primary.
As with database mirroring and log shipping, you can use disparate hardware for the primary
and secondary servers. There are no restrictions beyond the fact that hardware must be on the
WSC. However, you should appropriately size the hardware for the load that will be placed
on the servers as well as the performance goals required.
A special parameter, @loopback_detection, is used with the subscription stored procedure to
prevent changes from being sent back to the originating node. You can have clients connect
to either the primary or secondary node, resulting in load-balanced performance, improved
capacity for transaction throughput, or geographically aware clients that connect to the closest
node. In any of these cases, the hardware requirements to meet a specific performance goal
can be reduced on each node because the full client load is never attached to a single server.
In the case of a disaster event, however, the surviving node must respond to all client requests,
resulting in much lower performance.
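A hedged sketch of the relevant portion of the subscription call follows; the publication, database, and server names are placeholders:

-- Run at the publisher: add the subscription with loopback detection enabled
-- so that replicated changes are not sent back to the node they came from.
EXEC sp_addsubscription
    @publication = 'SalesPub',
    @subscriber = 'SQLProd02',
    @destination_db = 'Sales',
    @subscription_type = 'push',
    @loopback_detection = 'true';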
One of the downsides of replication is that it works on an article-by-article basis, where an
article is a set of data encompassing part of a table, a whole table, or a join of data between
tables. This results in an administrative effort for configuration that is directly proportional to
the number of tables in your database for an HA system. Each table must be added as an article to a publication; this isn't difficult when you're setting up replication, but any schema change
results in the need to publish another article. If this constant administrative requirement
can't be met, then some of your data may not be available if a disaster occurs.

Case Study: Handling Conflicts


If the possibility exists that the same data will be updated on separate nodes, then you
should consider merge replication instead of bidirectional transactional replication. Merge
replication is specifically designed to handle conflicts in updates on separate nodes.
With transactional replication, the last update won't necessarily be the update that is made
on both nodes. For example, suppose NodeA receives an update to a row, and it takes
one minute for this update to be moved to NodeB. Twenty seconds after the update on
NodeA, the same row is updated to a different value on NodeB. Because of the time
delays, 40 seconds after NodeB is updated, the update is overwritten by the value from


NodeA. Twenty seconds later, NodeA's value is overwritten by the replicated value from
NodeB. In this case, you'll have different data on each system.
If you're allowing this to occur on your systems, and they're intended to be used for high
availability, you should ensure that code checks are run on a regular basis to look for this
disparate data. The way in which you deal with the data will be specific to your business
requirements. You must decide how the potential issues with data-update conflicts can be
handled. You'll need to perform manual updates to the nodes with the incorrect data based
on what you decide for your business.
In merge replication, you can specify whether the first update, the last update, or custom resolution code is used to determine which value is written to all nodes. In any case,
where data can be updated on multiple nodes, you specify how conflicts will be resolved.
However, as with bidirectional transactional replication, the business rules for deciding
how conflicts are resolved will be specific to your business requirements.

Implementing High Availability with Merge Replication


Merge replication is similar to bidirectional replication in that changes made on either
the primary or secondary server are moved to the opposite server. It can be set to function
in a manner similar to transactional replication, operating on a transaction-by-transaction
basis. The HA features of merge replication are similar to those of transactional replication,
with the same hardware and scale requirements.
One of the advantages of merge replication over transactional replication is that updates can
easily be made to both the primary and secondary nodes. Any conflicting changes on various nodes can be resolved in a variety of manners by the SQL Server replication agents. This
can provide additional scalability as well as availability by allowing a portion of clients to be
served by the primary node and a portion to be served by a secondary.
CERTIFICATION READY?
Know the difference
between a push
subscription and a pull
subscription and when to
use each type.

Unlike the other HA technologies, merge replication lets you split client connections between both
nodes. A load-balancing technology used to direct clients to both nodes can immediately send
clients to the surviving node in the event that the other node fails. This can provide a seamless
transition between nodes in addition to balancing the load across multiple nodes for scalability.
If you choose to share the load with multiple servers, make sure you're aware of the performance
reduction that will occur if one node fails. If a reduced performance capability is acceptable in
the event of a disaster, then this can be a good technology for a highly available system.

Designing Highly Available Storage


One of the most important aspects of any HA solution is ensuring that your application
and the database services continue to function in the event of a disaster. All the technologies
discussed are designed to ensure that this happens. However, the disk subsystem is particularly important because your data is stored on it; the disk subsystem must be protected
differently than the server instance.
Disk drives are mechanical devices with moving parts, unlike all the other critical parts of a
database server, which are electronic. It's far more likely that a disk drive, with its spinning
platters and moving heads, will fail than any other part of your database server. The disk drive
is also where the data of record, meaning the authoritative source, is stored. As changes are
made, they aren't considered permanent until the log record is written to disk, and changes
aren't necessarily recoverable until the data record is stored on a disk.
No matter which technology you choose to build an HA solution with (or which combination of the four technologies previously discussed), you must be sure your storage solution
is well protected from any disaster. Clustering, the solution chosen most often before SQL


TAKE NOTE

No matter which type


of disk subsystem you
choose, it is not a
replacement for a tape-backup system that provides archived records
of your data as well as
off-site storage.

Server 2005, requires even more protection, because only one set of disks is shared between
the nodes on which the data is stored. The other three technologies have separate disk subsystems for the primary and secondary nodes, providing some degree of fault tolerance.
In designing a highly available storage solution, the method used for disk drives is the
Redundant Array of Inexpensive Disks (RAID). This technology is available in many forms
and possible combinations, each of which has different advantages and disadvantages.
Storage Area Network (SAN) technology is another way of building on the benefits of
RAID arrays to provide even more fault tolerance and better performance. Both of these are
discussed next.

UNDERSTANDING RAID ARRAYS


RAID technology dates back to 1988, when it was introduced in a paper by David Patterson,
Garth Gibson, and Randy Katz. This paper described how a group of inexpensive disk drives
could achieve greater reliability and performance than a single disk drive.
The original paper defined five levels of RAID; over the years, more have been added by other
groups seeking to improve on the concept.
The basic idea of RAID arrays is that multiple disks are used to store data along with a parity calculation based on the data. Thus one disk, and potentially more, can fail, and the data
can still be recovered. Modern RAID controllers often include the ability to have spare drives
standing by that are added into the array in the event of a drive failure. The remaining drives
in the array can then be used to rebuild the data from the failed drive on the new drive.
Of the various RAID levels, four are suited to SQL Server database servers. (More levels are
defined, but most are either rarely implemented or not suited for database servers.) Each of
these is briefly discussed here:
RAID 0. Also known as striping, this level involves a series of disks sharing the data load
across them. A portion of each stripe, or set of data, is written to each disk. Because each
disk operates independently and handles only a portion of the data set, performance
improves dramatically over a single disk. The downside to RAID 0 is that there is no
fault tolerance. If a single drive fails, all data is lost. A RAID 0 array can be formed from
two or more disks, and all the space on the disks is used.
This level isn't usually recommended for production systems, although it can be a great
file system for intermediate operations such as extraction, transformation, and load
(ETL) temporary storage.
RAID 1. Also known as mirroring because the same information is written to each one
of a set of disks. Because the information is the same on each set of disks, there are two
benefits: read performance and reliability. A read can come from either disk, so whichever one responds first allows the SQL Server instance to receive the data and continue
processing. Having a complete copy of data on another disk means that there is fault
tolerance: A drive can fail, and the data will still be retrievable.
The disadvantages to this level are the disk space requirements and the write performance. Because the data must be written to both disks, write performance may be
decreased. Also, because there are two disks, one for each side of the mirror, the cost for
disk storage doubles for SQL Server instances. You can form this type of array from pairs
of disks (two or more) joined together by the controller.
This level has great read performance and fault tolerance and is often used for SQL
Server transaction logs and tempdb database files.
RAID 5. Also known as striping with parity, this is the most common level of RAID
used in SQL Server database servers. In this type of array, you sacrifice one disk for
parity information that is calculated from the data striped across the other disks. The
parity information is shared across all disks, as is the data. This level provides a balance
between read and write performance because the data is striped, but all disks must be
read to retrieve the data. The cost isn't as high as RAID 1, because only one disk is used
for parity information. You can form a RAID 5 array from three or more disks.


RAID 5 provides a good balance between the cost of multiple disks and the performance
of RAID 1 for reads. Unless you need to build a very high level of performance into
your database files, this is a great choice for most database data files.
RAID 10. Also known as RAID 1+0 because it combines the RAID 1 and RAID 0 levels to get the benefits of both. This is one of the highest-performing RAID levels, but it's
also one of the most expensive options. A minimum of four disks is required to implement RAID 10.
RAID 10 is the best choice in SQL Server instances where high performance is required
and the expense of this level can be justified.

DESIGNING A RAID ARRAY


Every SQL Server production instance on a server should have its disks protected with RAID
technology. The disks are the most fragile part of the database server and also the most critical
because they hold the data. A server can be rebooted to solve many problems, but the disks
are required to reload the data after SQL Server starts up.
In choosing to design your disk arrays, you'll be forced to balance the performance you need
with the cost of the arrays. If cost isn't an issue, then you should implement RAID 10 everywhere to ensure high performance along with fault tolerance. Because this isn't usually the
case, you must first determine what performance requirements you need to meet and then
choose the highest level of fault tolerance you can afford.
The first decision you must make is whether the system will be primarily write oriented or read
oriented. Often, this comes down to the role of the server: that of a transactional, or write-based, system; or that of a decision-support, or read-based, system. SQL Server database servers
are frequently On-Line Transaction Processing (OLTP) based even though there may be more
reads than writes. SQL Server Analysis Services servers usually involve many more reads than
writes. You'll need to do some benchmarking to determine the ratio of reads to writes.
If your system is primarily writes, then you should probably choose to implement RAID 1, or
RAID 10 if you can afford it; RAID 5's parity calculation imposes a penalty on write performance. Either choice gives you good write performance along with fault tolerance for disk-drive failures.
If your system contains many more reads, then you should choose RAID 1 if the disk cost
isn't unreasonable. Otherwise, RAID 5 with one or two extra disks (based on capacity)
balances this cost with performance.

LAB EXERCISE

Perform Exercise 10.2 in your


lab manual.

CERTIFICATION READY?
RAID is a technology for
providing redundancy
at the drive level. Watch
for exam questions that
discuss HA requirements
that are focused beyond
disk drives.

However you choose to design your arrays, have extra disks available in case of failures, preferably operating in standby mode if your RAID controller supports hot spares. You also need to
choose how to separate your data files, as discussed in Lesson 3.
In Exercise 10.2, you'll walk through designing a series of RAID arrays for an instance of
SQL Server. The decision has been made to build one 16 GB array for the OS and pagefiles,
one 70 GB array for the log files, one 500 GB array for the data files, and one 50 GB array
for the tempdb database. You have 35 GB and 70 GB drives available for the arrays.
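As one illustrative way to reason about the sizing (not the only valid answer to the exercise), recall that a RAID 5 array of n disks yields roughly (n - 1) times the capacity of a single disk, while RAID 1 or RAID 10 yields half of the raw capacity. For example, a 500 GB data array built as RAID 5 from nine 70 GB drives provides 8 x 70 = 560 GB of usable space, and a 70 GB log array built as a RAID 1 pair of 70 GB drives provides exactly 70 GB.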

DESIGNING A SAN STORAGE ARRAY


SAN technology is similar to, but distinct from, a related technology known as Network Attached
Storage (NAS). Both technologies involve one or more arrays of disks, often configured as
multiple RAID arrays, that provide a large amount of centralized storage to many servers.
SAN technology differs from NAS in that it operates over a private network
as opposed to the normal network that most servers and clients use to communicate.
A SAN array is essentially a large set of disks managed by a high-performance controller,
which presents a mount point to various servers across a private network. Usually, the network is Fibre Channel based, with Host Bus Adapters (HBAs) in each server connected to a
switch that in turn connects to the SAN device. The SAN device presents a logical disk drive
to the server, which appears to be a single disk to the Windows host but could be two, three,
or dozens of disks on the SAN device.


TAKE NOTE

Most larger SAN solutions require extensive


vendor support and
aren't normally available
as off-the-shelf products.
Be sure you take advantage of the design and
testing resources your
vendor offers.

SANs utilize a complex technology and require specialized training to set up and manage.
In many large organizations, a single person is often dedicated to managing the equipment.
A complete discussion of SAN technology is beyond the scope of this text; however, a DBA
should be aware of some basic principles and ensure that they are implemented if the DBA
will be managing SQL Server instances whose data is stored on a SAN device.

CHOOSING RAID LEVELS


Most, if not all, SAN disks are arranged into multiple RAID arrays, which are then presented to
the servers either whole or carved up with a portion presented as a single disk to each server. The
RAID recommendations presented earlier in this Lesson apply to setting RAID levels on a SAN.
Because a single RAID array can be presented to multiple servers with a SAN, the DBA
should be aware of this situation if it exists. Although many SAN vendors tout the high performance of their arrays, they often build a single large RAID 5 or RAID 10 array. Portions
of this large array are presented to each server for individual use. This can be a potential performance problem and should be avoided or tested thoroughly to be sure SQL Server won't
experience performance issues.

DESIGNING FAULT TOLERANCE


The SAN device will provide fault tolerance for the disk drives, but a few places can affect
SQL Server instances if they aren't specifically addressed. The first of these is the HBAs in
your SQL Server. These rarely fail, but they're potential points of failure; if possible, your
database server should have two HBAs with separate paths to the SAN device. This should
include separate fiber paths to separate switches for each HBA. This setup helps to ensure that
a single hardware failure on a cable or hardware device doesn't cause SQL Server to fail.

CERTIFICATION READY?
Remember that a SAN
is still a single storage
mechanism. High-availability requirements
may necessitate storage
of data at multiple
physical locations.

Some SAN implementations have captive disks inside the Windows host that are used to boot
the operating system, leaving the SAN disks for data storage. Others boot the Windows server
directly from the SAN disks. If your database server uses the former design, make sure those
captive disks are protected by RAID. If they fail, the Windows operating system will fail even
though the SQL Server data will be protected and available on the SAN.
The last part of designing a fault-tolerant SAN solution is ensuring that the SAN device has a
backup solution designed into it. Because SAN devices often implement multiple terabytes of
data, it's crucial that this data, or at least your SQL Server data, is protected. You should use
either a second SAN (preferably geographically remote from the primary SAN) and/or a
tape backup solution to ensure that the data is available in the event the SAN device fails.

Designing a High-Availability Solution


Building an HA solution is usually a balance between the likelihood of a disaster event
occurring and your enterprise's tolerance for downtime. Many companies can function if
their database server is down for a day, so using a development or other spare server and
rebuilding the SQL Server installation is a valid HA solution. For many other companies,
however, having their database server unavailable for an hour results in substantial costs
to the enterprise.
In either case, the decision to implement an HA solution for your SQL Server requires you to
analyze the risks of disaster and the cost of downtime in order to build a solution. The specific solution you choose will depend on your needs.
Your design should consider four basic considerations: failover time, automatic or manual
failover, the application requirements, and cost. Each of these will provide input into your
design and help determine what type of solution you implement. Keep in mind that all these
factors must be balanced against one another, because more stringent requirements in one
area usually lead to additional costs in another.


The failover time is one of the main factors that influence the type of solution you'll implement. A failover time measured in hours means that you can choose almost any solution,
including building a server from scratch in a motel. However, a failover time in minutes or
seconds may mean that youre limited to clustering or mirroring. Because log shipping or replication requires manual intervention, the time required for an administrator to respond will
determine if you can use these technologies.

TAKE NOTE

If you require an administrator to respond to a disaster situation, make sure you test the
response time and enact rules to guarantee the response times can be met. For example,
you may want to ensure that the on-call administrator is never more than 30 minutes from
a computer.
Closely related to the failover time is whether automatic failover is required. If so, then you're
limited to clustering or mirroring unless you have and can spare programming resources to
build an automated solution.
Application issues also play a part in the HA solution you can build. Server instance-level
protection often mandates failover clustering, although you can conceivably use database mirroring, log shipping, or replication on all databases that need protecting. If you require SQL
Server Agent, Notification Services, or Reporting Services to be fault tolerant, then you may
be limited to clustering unless you can build creative solutions that can handle your needs.
The application's ability to handle disasters also will influence your choice of technology. If an
application can't be modified to work with server-name changes or other addressing considerations of some technologies, clustering or database mirroring may be your only choice for a
solution.
Finally, you should weigh the cost of the technology against the benefits of the HA solution.
An application that will cost you $100 per hour of downtime may not justify the cost of a
$50,000 cluster. Each solution you design will potentially have additional hardware costs,
vendor support costs, licensing costs, employee costs for on-call or after-hour on-site support,
and more. The total cost for each solution should take all these items into account. The cost
of downtime and the risk of downtime occurring should be compared to the solution cost to
determine if the solution is worth implementing.
No matter which type of technology you choose, the SQL Server hardware should be built
with fault tolerance in mind. This usually means spare parts for the various components of
the database server, but it could also include RAID technology, spare network paths, and vendor SLAs to ensure that your SQL Server instance will continue to function in the event of a
disaster.
The main thrust of an HA plan is to limit the single points of failure as much as possible.
All the technologies discussed earlier are aimed at preventing a single database, server, or disk
from being a single point of failure. There are a few other items to consider in designing your
solution, discussed next.

PLANNING FOR NONTECHNICAL ISSUES


Building a technical solution to an HA problem is the easy part. Deciding on a technology,
contracting for remote locations and services, and configuring software are all relatively
straightforward processes to complete. Other issues that can arise in a disaster situation,
however, are more difficult to plan for and may not be easily mitigated.
The biggest issues usually involve staffing in a disaster situation. This can be a small-scale
disaster where the DBA is hurt in a fire, is injured by tripping over the server power cord, or
for some other reason is left unable to respond to the issue. Or it may be a large-scale problem like those experienced during Hurricane Katrina in 2005, where companies found that
large portions of their staff were unable to report for work because of evacuations or personal
issues from the storm.


Recognizing that your staff is critical to the successful continuation of operations in the event
of a disaster involves two phases. First, you need to ensure that processes and procedures are
documented and employees are cross-trained. Doing so helps prevent any one person from
being a single point of failure.

TAKE NOTE

It's often hard to ensure that technical employees don't make themselves a single point of
failure. Sometimes they're averse to documenting too much of their job for fear of working
themselves out of a position. Show your employees that they're valuable in spite of the fact
that you have someone else who can perform their job.
The second part of mitigating staffing issues is planning for the problems people may experience and helping them work through those issues. Rotating shifts, providing help for families,
or other means of ensuring that your staff is able and willing to help the company through a
disaster situation can mean the difference between your database continuing to function or
never coming online again.

CONSIDERING REPORTING ISSUES


A common request from clients and managers is that the secondary server in an HA solution
be made available for reporting purposes. The logic is that because a separate copy of the data
exists and a server is sitting idle, it should be used if at all possible for another function.
Table 10-3 lists the possibilities for using the secondary server for reporting with each of the four
HA technologies. Keep in mind, and caution your clients or coworkers about, the impact of a
failover event on the reporting server. You must determine whether reporting will still be allowed
(or possible) on the reporting server if there is a failover from the primary server.
Table 10-3
Reporting options for the secondary HA server

HA TECHNOLOGY          REPORTING SERVER OPTIONS
Failover clustering    Not available for reporting.
Database mirroring     Not directly available, but database snapshots can be scheduled on the
                       mirror server and used for reporting.
Log shipping           Secondary database(s) can be restored with the STANDBY option and
                       used for reporting. Reporting is unavailable during restores.
Replication            Secondary database is available for reporting and, potentially, updates
                       if merge or bidirectional replication is used. Secondary is always
                       available.

Your choice for a reporting solution should take advantage of the potential of each technology
for meeting this need. However, it should be a secondary criterion for choosing a solution;
meeting your HA needs should be the primary objective.
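For example, if database mirroring is your HA technology, a database snapshot created on the mirror server can give report users a read-only copy of the data. The following is only a rough sketch; the database name, logical file name, and path are hypothetical, and database snapshots require an edition of SQL Server 2005 that supports them:

-- Create a snapshot of the mirrored database for reporting (names are examples only)
CREATE DATABASE Rentals_Reporting_Snapshot
ON ( NAME = Rentals_Data, FILENAME = 'D:\Snapshots\Rentals_Data.ss' )
AS SNAPSHOT OF Rentals;

A job on the mirror server could drop and re-create such a snapshot on a schedule so that reports see reasonably current data.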

TAKE NOTE

You can combine the HA technologies to achieve your needs, especially for reporting. Log
shipping or replication can be combined with database mirroring or clustering to build a
reporting server.

Developing a Migration Strategy


THE BOTTOM LINE

A migration strategy is needed in order to transition from a single-server configuration to
one of the HA configurations.


The last part of building an HA system is moving your current environment to a highly
available one. This section assumes that you have a system running that isn't fault tolerant,
and you wish to move the system to an environment that is designed to function in the event
of a disaster.
Because the system you design can be as simple or complex as your resources allow, it's impossible to discuss all possibilities, but some general guidelines can help you ensure a smooth
transition to the new environment.

Testing Your Migration


The migration to a new solution can take any number of paths, depending on how your
old and new systems are architected. The method you choose to perform this migration
also depends on the skills of your staff at quickly moving the application and other
factors discussed.

However you choose to perform the migration, it's critical that you test the plan. Your HA
solution will probably involve new servers, so set up a development or spare server as close as
possible to the existing SQL Server database server and test the steps for moving the database,
jobs, logins, and so on. Ideally, you should capture a real-world load using Profiler on the
production server and replay that during the migration test to be sure the workload can be
executed.
There may be a few steps or hundreds of steps in the migration, and you should document the order in which they occur as well as who should perform them. Doing so will help
ensure that the process proceeds smoothly.
Finally, you should test the failover and failback after the migration. Failback is the reverse of
failover in that failback refers to transferring the role of the principal server or database back to
the original server or database. Moving to an HA system makes sense only if you're sure that
the failover in the event of a disaster, and the failback later, can be performed when needed.

Minimizing Downtime
Depending on which technology you choose and the structure of your current environment, it's possible to eliminate any downtime for the application. If you're adding to your
existing environment, such as implementing log shipping, mirroring, or replication on
an existing SQL Server instance, you can add these items to your database and initialize
them without interrupting your applications access to the database.
Even if you're choosing to move to new hardware with one of these technologies, you could
conceivably add the technology as you're moving from the old server to the new one and then
fail over to the new server. This should be done only after some testing of the solution, but
it can move you onto a new server transparently to the application. You can then reconfigure
the old server or even replace it with another server and reconfigure the failover.
If you're implementing a cluster from a previously unclustered solution, you'll require downtime to move the data and bring it up on the cluster. On a SAN, this process can be as simple
as presenting the same disks to the new server, thereby minimizing downtime. If not, then
you should perform extensive testing and documentation of the process for migrating the
application to ensure that the actual migration occurs as smoothly as possible.


Implementing Address Abstraction


Starting with Active Directory in Windows 2000, Microsoft moved away from the older
NetBIOS-based naming scheme (NetBEUI and WINS) to the more widely used
Domain Name System (DNS) for name mapping. This abstraction enables the underlying
server IP address to change without affecting the ability of clients to access the server.
SQL Server clients typically address the server by its Windows name, which is unique on
the network and mapped or associated by DNS with an IP address.
One way to ensure a highly available system is to prevent a dependency on a particular server
name. This eases migrations to new hardware, implementation of clustering, failovers with log
shipping, and more. You can easily do this two ways in most Microsoft environments: using
DNS or using a load-balancing technology. Both of these function in a similar way to abstract
the server addressing from the actual server.
If you use DNS to abstract your server address, you should create a specific host name separate
from the server name that your clients will use to connect to the server. For example, if
dkranch.com is your domain in Active Directory and your SQL Server instance is hosted on the
SQLProd01 Windows server, the typical AD name of this server is SQLProd01.dkranch.com.
If you create a sqlprod.dkranch.com host name and link this to the IP address of SQLProd01,
then if you need to migrate to SQLProd02.dkranch.com, you can edit the DNS entry for
sqlprod.dkranch.com, and your clients will automatically connect to the new server.
The other method is to use a load-balancing scheme that routes clients to one or many servers
taking part in the load balancing. Microsoft offers Network Load Balancing as a feature of
Windows Server 2003, and many hardware devices perform the same function. In choosing one
of these for a database server, be sure you configure all clients to connect to the primary server
only by default. The secondary server should receive client requests only if the primary has failed.
If you have an abstraction solution in place, your migration to an HA system should be a
simple matter of editing the abstracted name.

Training Your Staff


One item thats often forgotten in planning for a new technology implementation is the
training of your staff to support the technology. If you purchase a vendor solution, or
a single employee designs and tests the solution, then others are often only peripherally
involved and unable to support the solution on their own.
A highly available system requires that the single points of failure be minimized, and this
includes your employees. Be sure you include time and budget to have at least two (and
preferably all) on-call employees trained.

SKILL SUMMARY
A highly available system is unique for many companies, involving those technologies and
processes that guarantee the system functions at the necessary level for the enterprise. The
scale and capabilities of the secondary system that is used if the primary system is unavailable
depend on the requirements of your organization.
Each of the four technologies available in SQL Server to implement highly available database
servers has its own features, disadvantages, and costs. The technology that is appropriate for
your application depends on the business's needs for that application. No single technology is
right for all applications.


As you design high availability into your database servers, make sure you consider all
technologies equally in determining which one is best suited for your application. Examine the
entire system, from hardware to network to staff outside of the four technologies, to ensure
that the entire application has as few single points of failure as possible.
For the certification examination:
Understand SQL Server failover clustering. You should know the capabilities and
limitations of failover clustering as an HA technology in SQL Server. Also know the
requirements of this technology over and above those of non-clustered SQL Server.
Understand SQL Server database mirroring. You should understand how database
mirroring works in SQL Server and when it's appropriate to use as an HA technology.
Know when to use log shipping. You should understand where and when log shipping can
be used to build a highly available system.
Understand how replication can be used in an HA system. Know which types of replication
can be used to build an HA system as well as the limitations of choosing this technology.
Identify single points of failure. You should understand what a single point of failure is
and how to identify the single points of failure in your system.
Know how to migrate your application to an HA environment. You should understand how
to develop a migration plan for your application to move to an HA environment.

Knowledge Assessment
Case Study
Ed's Heavy Equipment
Ed Harvey started an equipment-rental business in southeastern Virginia that specializes in home garden and tractor equipment. The company has equipment in a number
of home stores that customers can rent to use in their gardens, farms, or yards. Remote
terminals in each store communicate with a central office where the database server
tracks all rentals.

Planned Changes
The business has grown substantially, and now Ed wants to enable customers to reserve
equipment over the Internet as well as at stores. He wants to be sure his business continues to function even if a disaster occurs at one store or the central office.
The IT staff wants to be sure they choose the best combination of technologies to build
an HA system while being careful of the overall cost of the system.

Existing Data Environment


Currently, a single SQL Server 2005 server named SQLRentals is located at the central
office, and all clients connect to this server by its Windows name. This server contains a
central database that stores client and rental information and is backed up nightly.
A number of jobs run under SQL Server Agent and send emails to drivers to notify them
to move equipment between stores. Because there can be delays in getting equipment
transferred, these jobs usually send emails three days before the equipment is needed
and continue to send emails until the destination marks the equipment as in stock.


Existing Infrastructure
There is a single Active Directory domain to which all employees authenticate.
The current SQL Server 2005 server hardware is adequate, but a new server is expected
to be purchased this year to increase performance.
Each store has its own client machines, at least two per store, to connect to the central
office across a high-speed private network. Each store can also communicate with all
other stores via instant messaging, so clerks can send messages to each other.
Every store has room for its own server. The business considered this option initially,
but decided against it.

Business Requirements
Ed wants to be sure that if something happens to the server in the central office, all
clients can continue to connect to this server without interruption.
There is a remote possibility that the central office could go offline because of construction work in the area. Ed has arranged for another web server and separate connection
to the Internet at the Chesapeake store. This web server currently connects through the
private network to the central office. If the central office loses its Internet connection,
the Web site should continue to accept reservations. A delay of an hour or two to get
this running is acceptable.
Clients can be reconfigured to connect to another server if a long outage for the central
office is expected, but this isn't acceptable for short-term problems.

Technical Requirements
The solution designed should take advantage of SQL Server 2005 HA technologies to
meet the business requirements.
A few new servers can be purchased, but separate servers can't be placed in every store.
The private network provides adequate connections between stores for all client traffic
in the event of a disaster at the central office. It can't support disk-access traffic.
The ISP for the company provides load balancing from the Internet for both web
servers using the two separate connections from the central office and the Chesapeake
store. However, the internal connection from the web servers to the database server
is managed by the internal IT staff. If the database services move to a new server, the
connections should transition easily to the new server.

Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
Use the information in the previous case study to answer the following questions.
1. To ensure that the central office is adequately protected, new server hardware is being
purchased. Which technology would you choose to protect the database and ensure that
all SQL Server Agent jobs continue to run if the primary server has problems?
a. Failover clustering
b. Database mirroring
c. Log shipping
d. Replication


2. You decide to implement automatic failover from the central office for the application
in case that office goes offline. Which technology is best suited for this?
a. Failover clustering
b. Database mirroring
c. Log shipping
d. Replication
3. To ensure that all clients can redirect to a new server in the event of a disaster, how
should you set up the new servers?
a. Set the VIP to SQLRental01 on the cluster, and name the mirror server
SQLRental02.
b. Set up a DNS host as SQLProd, and direct it to the cluster. In the event of disaster, it
can be moved to the secondary server.
c. Change the application to try the primary cluster node Windows name first and then
the secondary cluster node Windows name.
d. This cannot be done with SQL Server 2005.
4. One of the senior managers wants to consider the possibility of having multiple failover
databases to ensure that two failures do not stop the business. Which technologies can
you use to meet this objective? (Choose all that apply.)
a. Failover clustering
b. Database mirroring
c. Log shipping
d. Replication
5. The application supports mostly OLTP traffic, with a good mix of reads and writes. Your
server has five drives in it, and you want to ensure that you protect the data as well as
have as much storage as possible. Which type of RAID should you choose?
a. RAID 0
b. RAID 1
c. RAID 5
d. RAID 10
6. Which of the following can you leave out of your HA design, given the fact that the
company has never used clustering?
a. SAN devices
b. Staff training on clustering
c. RAID technology
d. A secondary server
7. You are considering separating the store rentals from the Internet customer reservations
in two databases. To do this while ensuring that your system is still fault tolerant and
with minimal application changes, which technology should you choose?
a. Failover clustering
b. Database mirroring
c. Log shipping
d. Replication
8. You decide to implement database mirroring between two servers after upgrading all
clients to use the SQL Native Client that ships with SQL Server 2005. What do you
need to do to support automatic failover?
a. Build retry code into the application to try the primary server and then connect to
the secondary server.
b. Use two connection strings, one for each server, and have the application try both
each time it runs a query.
c. Add the secondary server into the connection string as the secondary database-mirroring server.
d. Put both servers behind a load-balancing device to handle this.


9. The application has been modified to have customer reservations connect to one server
and store reservations connect to a second server with merge replication moving data
between the servers. Customer orders from the Internet should take precedence over
store orders if the customer has an account. How can you ensure that this happens?
a. Set up the replication to always start with the Internet server and send data to the
store server.
b. Use custom code to resolve replication conflicts.
c. Set the priority of the Internet server lower than that of the store server.
d. Have the Internet application write to both servers.
10. As a secondary plan to your clustering solution, you decide to have log shipping send
copies of the transaction logs to the Chesapeake store. To enable managers to query
this database and not load the primary database, what option should you use with the
restores?
a. WITH STANDBY
b. WITH RECOVERY
c. WITH REPORTING
d. WITH ONLINE

LESSON 11

Designing a Data Recovery Solution for a Database

LESSON SKILL MATRIX

TECHNOLOGY SKILL                                                                        EXAM OBJECTIVE
Specify data recovery technologies based on business requirements.                     Foundational
Analyze how much data the organization can afford to lose.                             Foundational
Analyze alternative techniques to save redundant copies of critical business data.     Foundational
Analyze how long the database system or database can be unavailable.                   Foundational
Design backup strategies.                                                              Foundational
Specify the number and location of devices to be used for backup.                      Foundational
Specify what data to back up.                                                          Foundational
Specify the frequency of backup.                                                       Foundational
Choose a backup technique.                                                             Foundational
Specify the type of backup.                                                            Foundational
Choose a recovery model.                                                               Foundational
Create a disaster recovery plan.                                                       Foundational
Document the sequence of possible events.                                              Foundational
Create a disaster decision tree that includes restore strategies.                      Foundational
Establish recovery success criteria.                                                   Foundational
Validate restore strategies.                                                           Foundational

KEY TERMS

business continuity plan (BCP): A policy that defines how an enterprise will maintain normal day-to-day operations in the event of business disruption or crisis.

decision tree: A technique for determining the overall risk associated with a series of related risks; that is, it's possible that certain risks will only appear as a result of actions taken in managing other risks.

disaster recovery plan (DRP): A policy that defines how people and resources will be protected in the case of a natural or man-made disaster and how the organization will recover from the calamity.

media retention: A period of time, such as a year, a month, or a week, for which backup media is not altered and is kept in the state in which it was created. After this retention period, the media is allowed to be reused for another new backup.

recovery model: A database option that specifies how the write-ahead transaction log records events; the options are simple, bulk-logged, and full. These settings influence your protection against data loss.



So far in this textbook you've learned several key aspects about designing your SQL Server
database infrastructure. These have included considerations of physical design, hardware
needs, security issues, and so on. But no matter how well you design, plan, anticipate, and
prepare, things inevitably go wrong, and disaster strikes. The permanent loss of data is a
catastrophic event that can cripple an organization.

Given those considerations, it isn't surprising to discover that one of the primary responsibilities of a database administrator is to secure the information contained in the user databases.
This responsibility consists of several tasks, including designing for fault tolerance, developing
a data restoration strategy that anticipates disaster, and securing the data.
Not having a reliable, carefully thought-out disaster plan is an open invitation to catastrophe
and borders on inexcusable criminal negligence. It's as if you decided to jump out of an
airplane without a parachute, expecting the trees below to catch you. It might work, but
it's not a viable plan. In this Lesson, you'll examine how to plan a data recovery strategy for
databases, including a backup and restore plan.
Although you'll primarily focus on best practices, you should make a habit of establishing and
applying general principles when planning and using data-recovery techniques across your
infrastructure rather than trying to design a different plan for each database. Establishing and
using general principles can save you both time and money.
For example, assume you decided to design an individualized data-recovery strategy for
each database in your system. Doing so would probably result in having procedures that
vary from server to server. The subsequent plan would be unnecessarily complex. Avoiding
this is a simple matter of generalizing the principles involved and taking into account the
requirements of all your databases and applications. Armed with these principles, you can
easily design a single data-restoration strategy that can apply to the entire infrastructure.
A good data-recovery strategy starts with the premise that no matter what is done, every
database will need data restoration at some point in its life cycle. The role of a database
administrator is to create an infrastructure plan that lets you minimize how frequently data
recovery is needed, watch for emerging problems before they turn into disasters,
and have a contingency for every possibility. The plan should also let you proceed quickly to
restoration when disasters do occur and promptly verify that the restoration was successful.
The next section reviews some basics about backup and restoration, as well as the different
types.

Backing Up Data

THE BOTTOM LINE

Redundancy is key to surviving a loss: a backup power supply, a second NIC, a standby or
failover server in another physical building, a hot site in another state, personnel trained in each
other's duties, and so on. Here the emphasis is on a second, or even a third, copy of your
data and copies of data-entry worksheets so that you can recover to the millisecond of that loss.

A backup is a copy of data that is stored somewhere other than the hard drive of your computer, usually on tape or another device, such as a hard drive on another computer connected
over a local area network (LAN), somewhere that won't suffer the same consequences as the
primary site. There are three basic reasons to back up data:
The possibility of hardware failure. Despite significant advances in reliability, hardware
fails, often spectacularly and more often than not with an uncanny knack for happening
at exactly the wrong time. If you don't want to come to work one day and discover
that everything is missing because a hard drive went bad, you should always perform
regularly scheduled backups.


The chance of external disasters, whether natural or man made. No matter how
much redundant hardware you have in place, it's not likely to survive a tornado, a
hurricane, an earthquake, a flood, or a fire. Although the possibility is slight, man-made
disasters, such as a terrorist attack or an act of war, can be catastrophic.
Human malevolence. A large number of data disasters can be traced back to insiders.
Far too often, employees who are angry with their boss or their company seek revenge by
destroying or maliciously changing sensitive data. This is the worst kind of data loss, and
the only way to recover from it is by having a viable backup.
Now that you know why you should back up your data, you need to learn how to do it.

TAKE NOTE

You can use the Database Maintenance Plan Wizard to schedule backups to run automatically. To access the wizard in Management Studio, expand Management, right-click the
Maintenance Plans folder, and then select Maintenance Plan Wizard.

CREATING A BACKUP DEVICE


To do any kind of backup, you may choose to create a backup device, which is a place to put
the backed up data. For example, the tape drive or disk drive you use in a backup or restore
operation is a backup device. Microsoft SQL Server can back up databases, transaction logs,
and files to disk (local or over a network connection) and tape devices.
SQL Server isnt automatically aware of the various forms of media attached to your server,
so you have to tell it where to store the backups. You can create two types of backup devices:
permanent and temporary.
Temporary backup devices are created on the fly when you perform the backup. They're
useful for making a copy of a database to send to another office so that they have a complete
copy of your data. Temporary backup devices can also be used to make a copy of your
database for permanent offsite storage (usually for archiving).
Permanent backup devices can be used over and over again; you can also append data
to them. These attributes make them the perfect device for regularly scheduled backups.
Permanent backup devices are created before the backup is performed and, like temporary
devices, can be created over the network or a locally accessible hard disk or to a local tape
drive.
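As a minimal T-SQL sketch (the logical device name and file path here are only examples, not required names), creating a permanent disk backup device looks something like this:

-- Register a permanent disk backup device; the name and path are examples only
EXEC sp_addumpdevice 'disk', 'AdventureWorksDev', 'D:\Backups\AdventureWorks.bak';

Once the device exists, backups can be written to it by its logical name, as shown in the sections that follow.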

LAB EXERCISE
Perform Exercise 11.1 in your lab manual. In Exercise 11.1, you'll create a permanent backup device.

TAKE NOTE
If you use a tape drive as a backup medium, it must be physically attached to the SQL
Server machine.

LAB EXERCISE
Perform Exercises 11.2 and 11.3 in your lab manual.

PERFORMING FULL BACKUPS


As you might guess from the name, a full backup is a backup of the entire database that
includes the database files, the locations of those files, and portions of the transaction log
(from the log sequence number [LSN] recorded at the start of the backup to the LSN at
the end of the backup). This is the first type of backup you need to perform in any backup
strategy because all the other backup types depend on the existence of a full backup. In other
words, you cant do a differential or transaction log backup until youve done a full backup. A
full backup is sometimes called a baseline in a backup strategy.
To create a sample baseline, in Exercise 11.2 you'll back up the AdventureWorks database
to the permanent backup device you created in the previous section of this Lesson. Then, in
Exercise 11.3, you'll back up the database using T-SQL commands.
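As a rough sketch of the T-SQL involved (assuming the AdventureWorks sample database and the example backup device registered earlier; the backup set name is arbitrary), a full backup looks something like this:

-- Full (baseline) backup written to the example permanent backup device
BACKUP DATABASE AdventureWorks
TO AdventureWorksDev
WITH INIT, NAME = 'AdventureWorks full backup';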
As you saw previously, once you have a full backup, you can perform other backup types.


PERFORMING DIFFERENTIAL DATABASE BACKUPS


A differential backup is a copy of all changes made to a database since the last full backup
was performed. This includes all changes to data and database objects. A differential database backup records only the most recent change to a data record if a particular data record
has changed more than once since the last full backup (unlike a transaction log backup that
records each change). A differential backup takes less time and less space than a full backup
and is used to reduce database restoration times.
SQL Server figures out which pages in the backup have changed by reading the last LSN of
the last full backup and comparing it with the data pages in the database. If SQL Server finds
any updated data pages, it backs up the entire extent (eight contiguous pages) of data, rather
than just the page that changed.

TAKE NOTE

Because each differential backup records all changes since the last full database backup, only
the most recent differential backup is required for restoration of data.

Differential database backups are best used with medium to large databases in between scheduled full database backups. As the length of time required to perform a full database backup
increases, performing differential backups obviously becomes more useful. Differential backups are particularly useful in speeding up data restoration times in databases where a subset of
data changes frequently and results in large transaction logs.
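A hedged T-SQL sketch of a differential backup follows; it again assumes the AdventureWorks sample database, and the disk path is only an example:

-- Differential backup: captures extents changed since the last full backup
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_Diff.bak'
WITH DIFFERENTIAL;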
Performing only full and differential backups isn't enough. If you don't perform transaction
log backups, your database could stop functioning.

PERFORMING TRANSACTION LOG BACKUPS


A transaction log backup is a sequential record of all transactions recorded in the transaction
log since the last transaction log backup. Transaction log backups enable you to recover the
database to a specific point in time. This can be useful if, say, you want to restore the database
to just before the entry of incorrect data.
Even though they're completely dependent on the existence of a full backup, transaction log
backups don't back up the database itself. They only record sections of the transaction log,
specifically since the last transaction log backup. The best way to think of a transaction log
backup is as a separate object. Then it makes sense that SQL Server requires a backup of the
database as well as the log.
The length of time required to back up the transaction log will vary significantly depending
on the rate of database transactions, the recovery model used, and the volume of bulk-logged
operations. On databases with very high transaction rates and fully logged bulk operations, the
transaction log backup can be bigger than a full database backup, and frequent transaction log
backups may be required to regularly truncate the inactive portion of the transaction log.
Because a transaction log backup records only changes since the previous transaction log
backup, all transaction log backups are required for restoration of data.
In addition to the fact that a transaction log is an entity unto itself, there is another important
reason to back it up. When a database is configured to use the full or bulk-logged recovery
model, a transaction log backup is the only type of backup that clears old transactions out of
the transaction log; full and differential backups can only clear the log when the database being
backed up is configured to use the simple recovery model. Therefore, if you were to perform
only full and differential backups on most production databases, the transaction log would
eventually fill to 100 percent capacity, and your users would be locked out of the database.
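A minimal T-SQL sketch of a transaction log backup follows; the path is an example, and the database is assumed to use the full or bulk-logged recovery model:

-- Transaction log backup; also truncates the inactive portion of the log
BACKUP LOG AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_Log.trn';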

TAKE NOTE

When a transaction log becomes 100 percent full, users are denied access to the database
until an administrator clears the transaction log. The best way around this is to perform
regular transaction log backups.

LAB EXERCISE
Perform Exercise 11.4 in your lab manual.

CERTIFICATION READY?
The FULL, DIFFERENTIAL, and TRANSACTION LOG backup types are the fundamental
types of backups. Know how they relate to each other.

In Exercise 11.4, you'll perform a transaction log backup.


Although full, differential, and transaction log backups work well for most databases, another
type of backup is specifically designed for very large databases that are terabytes in size:
filegroup backups.

PERFORMING FILEGROUP BACKUPS


A growing number of companies have databases that are reaching the terabyte (TB) range
and beyond. These databases are called, logically enough, very large databases (VLDBs). If
you tried to back up a TB-sized VLDB on a nightly, or even weekly, basis, you'd probably
quickly become frustrated; even with the fastest state-of-the-art equipment, the backups
would take a long time. To get around that issue, Microsoft has provided a method to back
up small sections of the database: the filegroup backup.
A filegroup is a way of storing a database on more than one file. It also gives you the ability
to control in which of those files your objects (such as tables or indexes) are stored. Hence, a
database doesn't have to be on only one physical disk; it can be spread out across many disks,
with nearly unlimited growth potential.
A filegroup backup is a copy of each data file in a single filegroup. It also includes all database
activity that occurred while the file or filegroup backup was in process. A filegroup backup can
be used to back up one or more of those files at a time rather than the entire database at once.
This type of backup takes less time and space than a full database backup. It's used for
VLDBs that are too large to be backed up in a reasonable amount of time (such as in a
24-hour period). In a VLDB, you can design the database so that certain filegroups contain
data that changes frequently and other filegroups contain data that changes infrequently
(or perhaps is read only). Using this design, you can use a filegroup backup to perform
frequent backups of the data that changes frequently and to perform occasional backups of
the infrequently changing data. By splitting the backup into segments, you can perform the
necessary backups in the available backup window and achieve acceptable restoration times.
With VLDBs, a filegroup can be restored much faster than an entire database. One nice
thing about filegroup backups is that multiple backups can run in parallel to multiple physical
devices, which significantly increases backup performance.
A caveat: Filegroup backups require careful planning because you should back up and restore all
the related data and indexes together. In addition, a full set of transaction log backups is required
to restore filegroup backups to a state that is logically consistent with the rest of the database.
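As a rough T-SQL sketch (the database name, filegroup name, and path are hypothetical), a filegroup backup looks something like this:

-- Back up a single filegroup of a hypothetical VLDB
BACKUP DATABASE SalesVLDB
FILEGROUP = 'CurrentOrders'
TO DISK = 'D:\Backups\SalesVLDB_CurrentOrders.bak';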

TAKE NOTE

If the tables and indexes are stored on separate files, the files must be backed up as a single
unit. You can't back up the tables and the associated indexes separately.

Restoring Databases

THE BOTTOM LINE

The purpose of all backups is to provide backup copies of data that can be used by restore
processes. It is essential that backups be designed with restoration in mind. It is equally essential for database administrators to be familiar with the various types and methods of restoring
data in databases.

One of the most anxiety-provoking sights is a database that's graphically displayed in
Management Studio with the word Shutdown in parentheses next to it. This means something
bad, probably a corrupt disk, has happened to the database. It also means you're going to have
to perform a restore of your last backup.


Suspect or corrupt databases arent the only reasons to perform restores, though. You may, for
example, need to send a copy of one of your databases to the main office or to a branch office
for synchronization. You may need to recover from mistaken or malicious updates to the data.
These reasons, and many others, make it important for you to know how to perform restores.

UNDERSTANDING THE GENERAL RESTORE STEPS


Although every data restoration scenario is different, there are several common steps you
should take when you need to restore data because of a database failure:
Attempt to back up the transaction log. Always try to create a transaction log backup
after a database failure so that you can capture all the transactions up to the time of the
failure. You should use the NO_TRUNCATE option, which backs up the log when the
database is unusable. If you successfully back up transactions to the point of the failure,
restore this new transaction log backup set after you restore the other transaction log
backups.
Find and fix the cause of the failure. To do this, you need to follow both SQL Server's
and the operating system's troubleshooting procedures. You obviously want to find the
source of the problem so you can correct the problem (if possible) and take the necessary
steps to prevent it from happening again.
Drop the affected databases. Before the database can be re-created, it should first be
dropped so that any references to bad hardware are deleted. You can delete it using either
Management Studio or the T-SQL command DROP DATABASE <database>. If a hardware problem isn't the reason you're restoring, you don't need to drop the database.
Restore the database. You can use Management Studio to restore databases quickly.
Highlight the database to be restored, right-click it, choose Tasks, then select Restore,
then select Database, select the backup to restore, and click OK. If a database doesn't
exist but you have a backup of it before it was deleted, you can re-create it by restoring
the backup.
Using T-SQL makes sense when you want to restore a database that doesn't already exist. If a
database by the same name as the database in the backup set already exists, it will be overwritten.
To restore a backup set to a differently named database, use the REPLACE switch.
Although the syntax to do a restoration starts out simply, you can use many options to control
exactly what is restored from which backup set.
The syntax to do a restoration is as follows:
RESTORE DATABASE <database> FROM <device> <options>

These are the most common options:


RESTRICTED_USER. Only members of db_owner, dbcreator, or sysadmin roles can
access the newly restored database.
RECOVERY. Recovers any transactions and allows the database to be used. This is the
default if no options are specified.
NORECOVERY. Allows additional transaction logs to be restored, and also doesn't
allow the database to be used until the RECOVERY option is used. Basically, the
NORECOVERY switch lets you restore multiple backups onto the same database prior
to bringing the database online.
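Putting these options together, a minimal restore sequence might look like the following sketch; the database and file names are hypothetical, and only the last statement uses RECOVERY so that the database comes online after all backups are applied:

-- Restore the full backup but leave the database in a restoring state
RESTORE DATABASE Accounting
FROM DISK = 'D:\Backups\Accounting_Full.bak'
WITH NORECOVERY;

-- Apply the most recent transaction log backup and bring the database online
RESTORE LOG Accounting
FROM DISK = 'D:\Backups\Accounting_Log.trn'
WITH RECOVERY;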

PERFORMING STANDARD RESTORES


Restoring a database doesn't involve very many steps, but there is one very important setting
you need to understand before undertaking the task. The RECOVERY option, when
set incorrectly, can thwart all your efforts to restore a database. The RECOVERY option tells
SQL Server that you're finished restoring the database and that users should be allowed back
in. This option should be used only on the last file of the restore process.


For example, if you perform a full backup, then a differential backup, and then a transaction
log backup, you need to restore all three of those to bring the database back to a consistent
state. If you specify the RECOVERY option when restoring the differential backup, SQL
Server won't allow you to restore any other backups; you have told SQL Server in effect that
you're done restoring and that it should let everyone start using the database again. If you
have more than one file to restore, you need to specify NORECOVERY on all restores except
the last one.
SQL Server also remembers where the original files were located when you backed them up.
If you back up files from the D: drive, SQL Server restores them to the D: drive. This is great
unless your D: drive has failed and you need to move your database to the E: drive, or if you
need to change the location for any reason. In this instance, you need to use the MOVE . . .
TO option. MOVE . . . TO lets you restore a database that was backed up in one location to
another location.
Finally, before SQL Server will let you restore a database, SQL Server performs a safety check
to make sure you aren't accidentally restoring the wrong database. The first thing SQL Server
does is compare the database name that is being restored with the name of the database
recorded in the backup device. If the two are different, SQL Server won't perform the restore.
For example, if you have a database named Accounting on the server, and you're trying to
restore from a backup device that has a backup of a database named Acctg, SQL Server won't
perform the restore. This is a lifesaver, unless you're trying to overwrite the existing database
with the database from the backup. If that is the case, you need to specify the REPLACE
option, which is designed to override the safety check.
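A hedged sketch combining these options follows; the logical file names (Accounting_Data and Accounting_Log) and the paths are hypothetical and would have to match your own database's files:

-- Overwrite the existing database and relocate its files to a new drive
RESTORE DATABASE Accounting
FROM DISK = 'E:\Backups\Accounting_Full.bak'
WITH REPLACE,
     MOVE 'Accounting_Data' TO 'E:\Data\Accounting.mdf',
     MOVE 'Accounting_Log' TO 'E:\Data\Accounting_log.ldf',
     RECOVERY;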
LAB EXERCISE
Perform Exercises 11.5 and 11.6 in your lab manual.

In Exercise 11.5, you'll disable a database, and in Exercise 11.6, you'll perform a
simple restore.
This type of restore is useful if the entire database becomes corrupt and you need to restore
the whole thing. However, what if only a few records are bad, and you need to get back to the
state the database was in just a few hours ago?

PERFORMING POINT-IN-TIME RESTORES


It's not uncommon to be asked to reset the data back to a previous state, such as at the end
of the month, when accounting closes out the monthly books. This is possible if you're doing
transaction log backups, in which case you can perform a point-in-time restore.
In addition to stamping each transaction in the transaction log with a log sequence number
(LSN), SQL Server stamps them all with a time. That time, combined with the STOPAT
clause of the RESTORE statement, makes it possible for you to bring the data back to a
previous state.
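A minimal sketch of a point-in-time restore follows; the database name, path, and timestamp are examples only, and the full (and any differential) backups are assumed to have already been restored WITH NORECOVERY:

-- Roll the log forward only to 6:00 P.M. on the given date, then bring the database online
RESTORE LOG Accounting
FROM DISK = 'D:\Backups\Accounting_Log.trn'
WITH STOPAT = '2010-06-30 18:00:00', RECOVERY;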

TAKE NOTE

This process only works with transaction log backups, not full or differential backups.
In addition, you'll lose any changes that were made to your entire database after the
STOPAT time.

Another type of restore comes in handy for VLDBs: piecemeal restores. Piecemeal restores
were implemented with SQL Server 2005 and augment the concept of partial restores
introduced in SQL Server 2000.

TAKE NOTE

Another option available is to do a restore up to a specific log sequence number (LSN).


This could allow you to restore right up to and including a specific transaction without
knowing exactly when it occurred.


PERFORMING PIECEMEAL RESTORES


Piecemeal restores are used to restore the primary filegroup and (optionally) some secondary
filegroups and make them accessible to users. Remaining secondary filegroups can be restored
later if needed.
Every piecemeal restore starts with an initial restore sequence called the partial-restore
sequence. Minimally, the partial-restore sequence restores and recovers the primary filegroup
and, under the simple recovery model, all read/write filegroups. During the piecemeal-restore
sequence for other than Enterprise Edition, the whole database must go offline. Thereafter,
the database is online and restored filegroups are available. However, any filegroups that have
not yet been restored, remain offline.
Regardless of the recovery model that is used by the database, the partial-restore sequence starts
with a RESTORE DATABASE statement that restores from a full backup and specifies the
PARTIAL option. The PARTIAL option always starts a new piecemeal restore; therefore, you
must specify PARTIAL only one time in the initial statement of the partial-restore sequence.
When the partial restore sequence finishes and the database is brought online, the state of the
remaining files becomes recovery pending because their recovery has been postponed.
Subsequently, a piecemeal restore typically includes one or more restore sequences, which
are called filegroup-restore sequences. You can wait to perform a specific filegroup-restore
sequence for as long as you want. Each filegroup-restore sequence restores and recovers one
or more offline filegroups to a point consistent with the database. The timing and number of
filegroup-restore sequences depends on your recovery goal, the number of offline filegroups
you want to restore, and on how many of them you restore per filegroup-restore sequence.
With the mechanics of backing up and restoring under your belt, you're ready for a discussion
of theory. You need to know not only how but also when to use each of these types of
backups. You need to devise a viable backup strategy.

Devising a Backup Strategy

THE BOTTOM LINE

A backup strategy is a plan that details when to use which type of backup. For example,
you can use only full backups, full with differential, full with transaction log backups, or
any other valid combination. Your challenge is to figure out which one is right for your
environment. Examine the pros and cons of each type of strategy.

PERFORMING FULL BACKUPS ONLY


If you have a relatively small database, all you really need to do is perform full backups,
and you're done. What is a relatively small database? There is no hard-and-fast rule; what's
important is the size of a database relative to the speed of the backup medium. For example,
a 500 MB database is fairly small, but if you have an older tape drive that isn't capable of
backing up a 500 MB database overnight, you won't want to perform full backups on the
tape drive every night. In that case, effectively it's not a relatively small database, and you
need to think of a different strategy. On the other hand, if you have hardware capable of a
10 GB backup in an hour, you can consider a full-backups-only strategy, even though the
database is twenty times larger than the database in the other example.
The major advantage of a full-only strategy is that the restore process is faster than with other
strategies, because it uses only one backup set. For instance, if you perform a full backup
every night and the database fails on Thursday, all you need to restore is the full backup from
Wednesday night. With other strategies, the restore takes more time because you have more
backup sets from which to restore.
A disadvantage of a full-only strategy is that each backup is comparatively slow and large
compared to the other strategies. For example, if you perform a full backup every night on
a 500 MB database, you're backing up 500 MB every night. If you're using differential with


full, you aren't backing up the entire 500 MB every night, which is faster and requires less
disk space. A differential backup will often be a small percentage of the full backup size. This
could be only perhaps 10 MB of the 500 MB example database.
Another disadvantage of the full-backups-only strategy involves the transaction log. As you
saw earlier, the transaction log is cleared only when a transaction log backup is performed.
With a full-only strategy, your transaction log is in danger of filling up and locking your users
out of the database. To avoid this problem, you can set the recovery model to simple.
Another option is to perform the full backup and, immediately afterward, perform a
transaction log backup with the TRUNCATE_ONLY clause. With this clause, the log won't be
backed up, just emptied. Then, if your database crashes, you can perform a transaction log
backup with the NO_TRUNCATE option. The NO_TRUNCATE option tells SQL Server
not to erase what's in the log already so that its contents can be used in the restore process.
This approach gives you up-to-the-minute recoverability as well as a clean transaction log.
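The following SQL Server 2005-era sketch shows these options; the database is the AdventureWorks sample, the tail-log path is only an example, and note that the TRUNCATE_ONLY clause was removed in later versions of SQL Server:

-- Option 1: keep the log from filling by switching to the simple recovery model
ALTER DATABASE AdventureWorks SET RECOVERY SIMPLE;

-- Option 2: empty the log immediately after the full backup
BACKUP LOG AdventureWorks WITH TRUNCATE_ONLY;

-- After a failure, save the orphaned (tail) log without truncating it
BACKUP LOG AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks_TailLog.trn'
WITH NO_TRUNCATE;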

TAKE NOTE

The first thing you should do in the event of any database failure is use the
NO_TRUNCATE option with the transaction log backup to save the orphaned log.

PERFORMING FULL WITH DIFFERENTIAL BACKUPS


If your database is too large to perform a full backup every night, the best plan is to add
differentials to the strategy. A full/differential strategy provides a faster backup than full
alone. With a full-only backup strategy, youre backing up the entire database every time you
perform a backup. With a full/differential strategy, youre backing up only the changes made
to the database since the last full backup.
The major disadvantage of the full/differential strategy is that the restore process is slower
than with full-only, because full/differential requires you to restore more backups. Suppose
you perform a full backup on Monday night and differentials the rest of the week. Your
database crashes on Thursday. To restore the database, you'll need to restore both the full backup
from Monday and the differential from Wednesday. If it fails on Friday, you'll restore the full
backup from Monday and the differential from Thursday.
Be aware that differential backups don't clear the transaction log. If you opt for this method,
you should clear the transaction log manually by backing up the transaction log with the
TRUNCATE_ONLY clause.

PERFORMING FULL WITH TRANSACTION LOG BACKUPS


Another method to consider, regardless of the size of your database, is full/transaction. This is
the best method to keep your transaction logs clean, because it's the only type of backup that
purges old transactions from your transaction logs.
This method also makes for a very fast backup process. For example, you can perform a full
backup on Monday and transaction log backups three or four times a day during the week.
This is possible because SQL Server performs online backups, and transaction log backups are
usually small and quick.
Another point to remember is that transaction log backups are the only type that gives you
point-in-time restore capability.
A disadvantage is that the restore process is a little slower than with full-only or full/differential
because there are more backups to restore.

PERFORMING FULL, DIFFERENTIAL, AND TRANSACTION LOG BACKUPS


If you combine all three types of backups, you get the best of all possible worlds. The backup
and restore processes are still relatively fast, and you have the advantage of point-in-time
restore as well. Suppose you perform a full backup on Monday, transaction log backups every


four hours (10:00 a.m., 2:00 p.m., and 6:00 p.m.) throughout the day during the week, and
differential backups every night. If your database crashes at any time during the week, all you
need to restore is the full backup from Monday, the differential backup from the night before,
and the transaction log backups, sometimes called incremental backups, up to the point of
the crash. This approach is fast and simple. However, none of these combinations work well
for a monstrous VLDB; for that, you need a filegroup backup.

PERFORMING FILEGROUP BACKUPS


We discussed the mechanics of the filegroup backup earlier in this Lesson; recall that filegroup backups are designed to back up small chunks of the database at a time, not the whole
database at once. This may come in handy, for example, with an 800 GB database contained
in four files in four separate filegroups. You can perform a full backup once per month and
then back up one filegroup each week. Every day, you perform transaction log
backups to maximize recoverability.

TAKE NOTE

SQL Server can determine which transactions belong to each filegroup. When you restore
the transaction log, SQL Server applies only the transactions that belong to the failed group.

PERFORMING PARTIAL AND PARTIAL DIFFERENTIAL BACKUPS


A partial backup contains all the data in the primary filegroup, every read-write filegroup, and
any optionally specified files. Partial backups are useful when a database contains one or more
read-only filegroups that have been read-only since the last full backup. A partial backup of a
read-only database contains only the primary filegroup. A partial differential backup records
only the data that has changed in the filegroups since the preceding partial backup; such a
partial backup is called the base for the differential. Therefore, partial differential backups are
smaller and faster than partial backups, which facilitates making frequent backups to decrease
your risk of data loss.
Although these types of backups, which are new to SQL Server 2005, are easy to use
and provide more flexibility for backing up under the simple recovery model, they aren't
supported by all recovery models.
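As a rough sketch (the database name and path are hypothetical), a partial backup in SQL Server 2005 uses the READ_WRITE_FILEGROUPS option:

-- Partial backup: the primary filegroup plus all read/write filegroups
BACKUP DATABASE SalesVLDB
READ_WRITE_FILEGROUPS
TO DISK = 'D:\Backups\SalesVLDB_Partial.bak';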
Now that you understand how to do backups and restores and the types open to you, you
need to consider designing a backup and restore strategy.

Designing a Backup and Restore Strategy: The Process


Performing database backups is one of the most important daily tasks of a database
administrator. SQL Server makes performing backup operations simple. However, you
must remember that the reason for making backups is so you can recover some or all
of your data and databases in the event of a failure. You shouldn't be figuring out how
to do that while your users, boss, and senior management are demanding to know
where the data they need is. You must have a well-rehearsed, fast, and secure means of
performing recovery.

Managing reliable and secure backups across an enterprise can quickly become a complex
task. Therefore, it's vital that you develop and test the backup and recovery strategy for your
database server infrastructure.
The recommended approach you should take when defining the backup and restore strategy
is to start with commonly accepted best practices for performing and documenting SQL
Server database backup and restore operations and then adapt them for the specific needs
of your organization.


When you're designing an organization-wide backup and restore strategy, follow these eight
key steps:
1. Analyze business requirements.
2. Categorize databases based on recovery criteria.
3. Assign a recovery model for each category.
4. Specify the backups required to support each category.
5. Specify a backup frequency policy for each category.
6. Determine the backup security policy for each category.
7. Document the backup strategy.
8. Create a backup validation and testing policy.

Here's what to do and how to successfully complete each of these steps.

ANALYZING BUSINESS REQUIREMENTS


Your data recovery strategy should allow you to recover missing data as quickly as possible.
You should also be aware of which databases are the most critical and the order in which they
should be recovered in the event of a major failure. Review some guidelines for determining
realistic database recovery requirements as well as how to justify decisions to management
concerning the strategy for recovering databases.
First, examine each database in your organization and determine the volume of data loss
that the organization can tolerate and the amount of downtime that the organization can
withstand when recovering the lost data.
For each database or set of databases, consult with the stakeholders, and use their input to help
determine the value of the data for the organization's operations. If you're not sure how, you
can use two key elements to help determine the real business requirements for data recovery:
Quantify the acceptable cost of potential data loss. The value of data and the impact
of its loss will vary from database to database. For example, last week's sales orders are
more important than the maintenance records of the company's fleet of executive cars.
Establish how critical the loss of data would be to the business. Also look to see if loss
of data could result in legal or regulatory problems. You need to review this information
with the stakeholders and confirm with management.
Quantify the cost of data loss in monetary terms whenever possible. Although these are
usually estimates, they can give you a measure by which you can compare databases and
prioritize their recovery needs.
Determine the time and cost of database recovery. For each database, determine the
maximum period of time it can be unavailable before the impact becomes so great that
the organization can no longer function effectively. For example, critical data must be
recoverable immediately, whereas less important data can often be recovered after a delay.
You need this week's sales data now. The date of the last oil change of the CFO's car can
wait a week if need be. You should also be prepared to quantify the recovery time as a
monetary cost, and you can then compare this against the value of the data.

CATEGORIZING DATABASES BASED ON RECOVERY CRITERIA


To manage and develop a global data-recovery strategy, you need to categorize the importance
of data based on some criteria. This in turn is based on the criticality of data, but you should
also assess the size and rate of change of the data the database contains.


Among the many criteria you can use to categorize databases, you should include at least the
value, volatility, and size of the data, as described here:
Value of the data. Determine the significance of the data held in a database, and
identify those databases that are most critical. Take into consideration the role that
the data plays in the organization, the estimated cost of data loss, and the cost of data
unavailability during recovery. The end result may be categories such as mission-critical
databases, department-critical databases, and noncritical databases. Mission-critical
and department-critical databases may both require backups of transaction logs, but
noncritical databases may only require database backups.
Rate of change of the data. Very active databases may require a different backup-and-restore
strategy than that for relatively inactive databases. If the data changes frequently, you'll find
that differential backups will be less useful than regular full and transaction log backups.
Size of the data. The size of a database can impact the options available for performing
backup and restore operations. You can use the size to determine the proper combinations of backups that you should take as part of your backup strategy. A large database
might be backed up only once per week or might have different filegroups backed up
each night. A smaller database might be fully backed up nightly.

Choosing a Recovery Model


Backup and restore operations occur within the context of recovery models. A recovery
model is a database property that controls the basic behavior of backup and restore
operations for the database. For instance, a recovery model controls how transactions are
logged, whether the transaction log requires backing up, and what kinds of restore operations
are available. A new database inherits its recovery model from the model database.
Recovery models simplify recovery planning, simplify backup and recovery procedures, clarify
trade-offs among system operational requirements, and clarify trade-offs among availability
and recovery requirements. Three recovery models are available: simple, full, and bulk-logged.

CHOOSING THE SIMPLE RECOVERY MODEL


This model minimally logs most transactions, logging only the information required to ensure
database consistency after a system crash or after restoring a data backup.
As old transactions are committed and the log isn't needed anymore, the log is truncated.
This truncation of the log eliminates backing up and restoring transaction logs. However, this
simplification comes at the expense of potential data loss in the event of a disaster. Without
log backups, the database is recoverable only up to the time of its most recent data backup.
The simple recovery model is generally useful only for test and development databases or
for databases containing mostly read-only data. Simple recovery requires the least administration. Data is recoverable only to the point of the most recent full backup or differential
backup. Transaction logs aren't backed up, and minimal transaction log space is used. After
the log space is no longer needed for recovery from possible server failure, the space is reused.
Additionally, restoring individual data pages is unsupported.
The simple recovery model is inappropriate for production systems where loss of recent
changes is unacceptable. In such cases, Microsoft recommends using the full recovery model.
When youre using simple recovery, the backup interval should be long enough to keep the
backup overhead from affecting production work, yet short enough to prevent the loss of
significant amounts of data.

CHOOSING THE FULL RECOVERY MODEL


This model fully logs all transactions and retains all the transaction log records until after
they're backed up. In the Enterprise Edition of SQL Server 2005, the full recovery model
allows a database to be recovered to the point of failure, assuming that the tail of the log has
been backed up after the failure.


The full recovery model covers the broadest range of failure scenarios and includes both database backups and transaction log backups. It also provides the most flexibility for recovering
databases to an earlier point in time.
If one or more data files are damaged, recovery can restore all committed transactions. In-process transactions are rolled back. In Microsoft SQL Server, you can back up the log while
a data or differential backup is running. In the Enterprise Edition of SQL Server, you can also
restore your database without taking all of it offline if your database is in full or bulk-logged
recovery mode.
The full recovery model supports all restore scenarios.
By logging all operations, including bulk operations such as SELECT INTO, CREATE
INDEX, and bulk-loading data, the full recovery model allows you to recover a database to the
point of failure or to an earlier point in time if you're using the Enterprise Edition of SQL Server.
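
To make the point-in-time idea concrete, here is a simplified restore sequence under the full
recovery model. The database name, file paths, and the STOPAT time below are assumptions
used only for illustration.

-- 1. After the failure, back up the tail of the log and leave the database in the restoring state.
BACKUP LOG SalesDB
   TO DISK = 'E:\Backups\SalesDB_Tail.trn'
   WITH NORECOVERY;

-- 2. Restore the most recent full backup without recovering the database.
RESTORE DATABASE SalesDB
   FROM DISK = 'E:\Backups\SalesDB_Full.bak'
   WITH NORECOVERY;

-- 3. Apply the log backups in sequence, stopping just before the failure or the bad transaction.
RESTORE LOG SalesDB
   FROM DISK = 'E:\Backups\SalesDB_Tail.trn'
   WITH STOPAT = '2010-06-01 17:44:00', RECOVERY;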

CHOOSING THE BULK-LOGGED RECOVERY MODEL


The bulk-logged recovery model is intended as a supplement to the full recovery model.
Before bulk operations such as bulk loading or index creation, you may want to switch a full
model database temporarily to the bulk-logged recovery model. If you do, you should switch
back to the full recovery model immediately afterward.
This model minimally logs most bulk operations, such as index creation and bulk loads, while
fully logging other transactions. Bulk-logged recovery increases performance for bulk operations.
The bulk-logged recovery model supports all forms of recovery, although with some restrictions.

TAKE NOTE

A database needs to be running in full or bulk-logged recovery mode in order to support
log shipping.

Table 11-1 compares the recovery models.


Table 11-1
Recovery model comparison

Simple
  Benefits: Permits high-performance bulk copy operations. Reclaims log space to keep space
  requirements small.
  Data loss exposure: Changes since the most recent database or differential backup must be redone.
  Recover to point in time? Can recover to the end of any backup. Then, changes must be redone.

Full
  Benefits: No work is lost due to a lost or damaged data file. Can recover to an arbitrary point
  in time.
  Data loss exposure: Normally none. If the log is damaged, changes since the most recent log
  backup must be redone.
  Recover to point in time? Can recover to any point in time.

Bulk-logged
  Benefits: Permits high-performance bulk copy operations. Minimal log space is used by bulk
  operations.
  Data loss exposure: If the log is damaged, or bulk operations occurred since the most recent
  log backup, changes since that last backup must be redone. Otherwise, no work is lost.
  Recover to point in time? Can recover to the end of any backup. Then changes must be redone.

You should assign a recovery model for each category of database that you've identified. A good
rule of thumb for assigning a recovery model is to evaluate the requirements for each category:
Use the full recovery model for the most critical databases. This enables you to recover
data quickly and efficiently as long as you have the appropriate database and transaction
log backups available.
Use the bulk-logged recovery model if the database makes extensive use of bulk
operations to maximize performance.
Use the simple recovery model for databases with less critical recovery and performance
requirements. This lets you minimize the administrative overhead associated with these
databases, at the risk of losing changes made since the last backup was taken.

TAKE NOTE

CERTIFICATION READY?
Expect exam
questions requiring an
understanding of the
different backup options
and how the Recovery
Model setting relates to
your backup options. If
the Recovery Model is set
to Simple, how often are
transaction log backups
going to be done?

When a database is created, it has the same recovery model as the model database. You
can change the recovery model using either T-SQL or Management Studio. The T-SQL
command is ALTER DATABASE <database> SET RECOVERY { FULL | BULK_LOGGED | SIMPLE }.
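
For example, you might temporarily switch a database to the bulk-logged model around a large
load and then return it to full recovery. The database name and backup path below are
illustrative assumptions.

-- Switch to bulk-logged just before a bulk operation such as BULK INSERT, SELECT INTO, or CREATE INDEX.
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

-- ... perform the bulk operation here ...

-- Switch back to full recovery, then take a log backup to restore full point-in-time protection.
ALTER DATABASE SalesDB SET RECOVERY FULL;
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_AfterBulk.trn';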

SPECIFYING WHAT BACKUPS ARE NEEDED TO SUPPORT EACH CATEGORY


The different categories of databases that you've defined might require taking different
combinations of backups using varying backup schedules to ensure that you keep recovery
times within the business constraints of the organization. Remember that all recovery
models require that you do a full database backup at regular intervals and factor it into your
plan. You can then take combinations of transaction log and differential database backups
according to the requirements of each category.
SPECIFYING THE BACKUP FREQUENCY
While designing a backup frequency strategy, you should do the following:
Specify a periodically repeating backup schedule for each category. For example, mission-critical databases with low activity might require you to take a full database backup
once a week, a differential backup daily, and transaction log backups every 30 minutes
(a sketch of these commands follows this list).
Specify a media retention or rotation strategy for stored backups that meets business
needs and complies with relevant laws and regulations.
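
As a sketch of that schedule, the commands below correspond to the weekly full, daily
differential, and half-hourly log backups just described; in practice each would run as a
scheduled SQL Server Agent job. The database name and paths are assumptions.

-- Weekly: full database backup.
BACKUP DATABASE SalesDB TO DISK = 'E:\Backups\SalesDB_Full.bak';

-- Daily: differential backup (changes since the last full backup).
BACKUP DATABASE SalesDB TO DISK = 'E:\Backups\SalesDB_Diff.bak' WITH DIFFERENTIAL;

-- Every 30 minutes: transaction log backup.
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_Log.trn';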

TAKE NOTE

You should always back up the master database before adding a new database or deleting an
existing database as well as prior to making any global configuration changes to SQL Server.

SETTING THE BACKUP SECURITY POLICY


A database backup contains a copy of the information held in the database. Therefore, you
should apply the same level of security to it as to the original.
For each category of database, establish a policy for ensuring backup security. The policy
should cover all parts of the backup process, including creating the backups, temporarily storing backups on site, and transporting and storing backups in an offsite location, a process
known as maintaining a chain of custody. Typically, auditors will audit the chain-of-custody
records for financial and other key data. In the event of an audit, you must also be able to
demonstrate that your data has not been altered or accessed by unauthorized personnel.
You should integrate these key points into your strategy for securing backups:
Ensure that the offsite storage location is physically secure and available whenever
required. The storage facility should be far enough away to not be subject to the same


type of disaster that might affect your production location but close enough to enable
you to obtain your storage media in a reasonable time.
Use a secure method of delivering backup media to their storage destination.
Protect your backups by using strong passwords.
Back up your encrypted data to files and tape media.

TAKE NOTE

With the introduction of transparent data encryption (TDE) in SQL Server 2008, you now
have the choice of cell-level encryption as in SQL Server 2005, full database-level encryption by using TDE, or the file-level encryption (EFS) options provided by Windows. TDE
is the optimal choice for bulk encryption to meet regulatory compliance or corporate data
security standards. TDE works at the file level, which is similar to two Windows features:
the Encrypting File System (EFS) and BitLocker Drive Encryption, the new volume-level
encryption introduced in Windows Vista, both of which also encrypt data on the hard
drive. TDE does not replace cell-level encryption, EFS, or BitLocker.
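
If you choose TDE on SQL Server 2008, enabling it follows a pattern along these lines; the
certificate name, database name, passwords, and file paths here are placeholders you would
replace with your own.

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
   WITH ALGORITHM = AES_256
   ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE SalesDB SET ENCRYPTION ON;

-- Back up the certificate and its private key; without them, backups of the encrypted
-- database cannot be restored on another server.
USE master;
BACKUP CERTIFICATE TDECert
   TO FILE = 'E:\Keys\TDECert.cer'
   WITH PRIVATE KEY (FILE = 'E:\Keys\TDECert.pvk',
                     ENCRYPTION BY PASSWORD = '<strong password>');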

DOCUMENTING THE BACKUP STRATEGY


You must carefully document the backup strategies that you've created. The documents should
be made available to any administrators who will be called on to execute them. For each category
of database, include the scheduled backup frequency, the database recovery model, references to
external storage locations that keep the backup copies, and any other information required for
performing restore operations. These documents are also of interest to financial auditors who
will assess the adequacy of the procedures for safeguarding a company's financial records.

TAKE NOTE

Don't include passwords in your documentation, because you then make your system only
as secure as the document in which the passwords can be found.

CREATING A BACKUP VALIDATION AND TESTING POLICY


For each category of database, include a validation and verification policy to ensure that your
backup copies are actually useful. These policies must include frequent checks of database
restore procedures (at least once a week) and verification that you can perform a full server
restore at least once a year.

TAKE NOTE

When backing up and restoring databases, SQL Server provides online page checksum verification and backup and restore checksums. You should consider always using the CHECKSUM
option with the BACKUP command. You can use the RESTORE VERIFYONLY T-SQL
command to verify the validity of a backup without performing an entire restore operation.

An excellent method of assuring that the databases stored in a backup don't contain any allocation, structural, or logical integrity problems is to use T-SQL to run the Database Console
Command DBCC CHECKDB prior to initiating BACKUP DATABASE. DBCC CHECKDB
performs the following database operations:
Runs DBCC CHECKALLOC to check the consistency of disk-space allocation
structures for the database.
Runs DBCC CHECKTABLE to check the integrity of all the pages and structures that
make up every table or indexed view in the database.
Validates the Service Broker data in the database.


Runs DBCC CHECKCATALOG to check for catalog consistency within the specified
database. Note that the database must be online.
Validates the contents of every indexed view in the database.
This means the DBCC CHECKALLOC, DBCC CHECKTABLE, and DBCC
CHECKCATALOG commands don't have to be run separately from DBCC CHECKDB.
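
Putting these checks together, a validation pass might look like the following sketch; the
database name and backup path are assumptions.

-- 1. Check allocation, structural, and logical integrity before backing up.
DBCC CHECKDB (SalesDB);

-- 2. Back up with page-checksum verification enabled.
BACKUP DATABASE SalesDB
   TO DISK = 'E:\Backups\SalesDB_Full.bak'
   WITH CHECKSUM;

-- 3. Confirm the backup set is complete and readable without restoring it.
RESTORE VERIFYONLY
   FROM DISK = 'E:\Backups\SalesDB_Full.bak'
   WITH CHECKSUM;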

Developing Database Mitigation Plans


THE BOTTOM LINE

Planning how to handle unpredictable future problems can greatly mitigate their negative effects.
A database disaster recovery plan should be part of a broader disaster recovery plan for
your department or organization's entire IT infrastructure. Developing a good database
disaster recovery plan requires coordination with other departments within the organization,
including management, network administrative staff, and offsite storage operators.
Similarly, a business continuity plan (BCP) deals with the steps needed should a local
disruption or process change occur. As with a disaster recovery plan, the BCP has to be
coordinated with all participants to recover from the loss of key employees (implement
cross-training programs, perhaps), new acquisitions and mergers (start a business process
reengineering effort, perhaps), and/or a constant flood of new Microsoft updates and
products (analyzing the value of SQL Server 2008, perhaps).

CATEGORIZING THE INFORMATION


What you put in your database recovery or maintenance plan depends on the size of your
organization and the complexity of your database server infrastructure. If you have a really
large organization or a complex plan, it's a good idea to subdivide the plan and assign the
different parts to modules corresponding to the functionally independent parts of your
organization or the stages of the plan.
Detail is important in any mitigation plan. There are a number of ways you can categorize
the types of information the plan should contain. The following sections show some examples
of the information you should include.

CONTACT LIST
When an interruption occurs, you should have a list of key contacts readily available. These
are the people you'll notify about the event and the current status of the system. You can
group contacts based on their role or need for information:
Key managerial personnel in your own department. These contacts are typically
responsible for notifying other members in the organization. In some cases, you may be
the one who has to notify others, so you'll need a more comprehensive contact list.
Technicians and other staff responsible for helping recover the system. These
contacts include hardware service technicians, enterprise application service staff, and
possibly spare-part suppliers and transport agencies, for plans that require quickly
obtaining replacement machinery. You should also include backup personnel whom you
can notify in case you can't contact the first level of key individuals.

TAKE NOTE

You must also have a mechanism for providing access to the database administrator's
password, should the database administrator (or the designated backup member of staff)
be unavailable. You should treat this password with all the sensitivity accorded to critical
financial data and protect it accordingly.


Those who need an alternative mode of operation while the problem is resolved.
This includes both personnel and departments affected by the disaster. For example,
if you detect that the database supporting a shipment-tracking system of the freight-forwarding department has failed, you should notify the designated contact for that
department, who can in turn notify the customer service staff.
Just as important, make sure all contact information is up to date. Finding out it isn't should
happen before you're in the middle of an actual calamity. Don't get sloppy!

DECISION TREE
A comprehensive recovery plan must include key details for performing recovery in a wide
variety of circumstances. You should document the various circumstances that can arise and
the detailed steps to take to resolve each situation. For a particular type of disaster, you can
often consolidate many different recovery paths into a decision tree. A well-developed and
tested decision tree can help reduce errors and speed up the recovery process. You'll review
how to create a decision tree later in this Lesson.
RECOVERY SUCCESS CRITERIA
Make sure you have a set of criteria that verifies that a particular recovery process is complete
and successful. Don't think you're done after you've restored the databases for which you are
responsible; that's only the first step of the recovery. Make sure peripheral connectivity and
supporting items that applications require to access the databases are also available. The recovery process is finished only after the applications and anything else using the databases are
running as expected.
LOCATION OF BACKUPS, SOFTWARE, HARDWARE, KEYS,
AND ACTIVATION PROCESSES
You should document the locations of backups, software, hardware, serial numbers, software-activation keys or activation and configuration processes, and documentation describing how
to rebuild servers and reinstall software. Be sure to include information about software version numbers and any service packs required. If you're responsible for maintaining database
server hardware, you should also include locations of spare parts, such as replacement disks,
memory, and processors.
INFRASTRUCTURE DOCUMENTATION
The database disaster plan can contain server hardware specifications and configuration
information. However, there may also be related documents that it doesn't contain, such as
infrastructure diagrams and application process documents. You should make sure the disaster
plan specifies the location of the most recent versions of these documents.
CREATING A DISASTER RECOVERY DECISION TREE
The key to a good disaster recovery plan is anticipation and contingency planning. You should
assess the different types of disasters that can occur, analyze recovery needs, and provide
detailed steps for recovering from each situation. As you can imagine, the recovery processes
involved can be complex, and the data involved may be irreplaceable as well as critical. Taking
the wrong step during the recovery process can lead to an inordinate amount of lost time.
Adding to this area of concern is the simple fact that when disaster strikes, stress levels go up.
In most environments, stress is managed and activities are kept on the correct path by controlling the decision flow. If youve ever watched a disaster movie, you remember the scene
where the professionals trying to manage the event methodically go through their checklists.
One of the goals of disaster planning is to preempt decisions and ensure that they stay on the
path that will achieve the optimal results by providing a detailed set of steps to be performed
according to the circumstances.


These steps are usually captured in decision trees that can vary based on specific conditions.
At a high level, you should attempt to classify the types of potential database disasters, how
they can be detected, and what impact they can have on the availability of your databases.
Then, for each type of disaster, determine the proper order of recovery and the methods for
verifying that recovery is successful. The following sections provide some guidelines for developing a decision tree for a database disaster recovery plan.

CLASSIFYING DATABASE LOSS SCENARIOS


As a first step, you should classify possible disasters into relevant groups that have similar
recovery paths. Each group can become a scenario that you then associate with a decision tree.
Some scenarios that you can use to help classify database service loss include the following:
Wide-ranging natural or man-made disaster. Examples include an earthquake, a volcanic
eruption, a flood, a war, or a terrorist attack. This type of disaster could affect more than
one location in a single geographic area, and the corresponding recovery plan must not rely
on being able to recover by using locally held backups, software, or hardware.
Loss of a single server or location. This could be the result of a fire or a serious power
outage that affects a single office, building, or immediate locality. The recovery plan
must enable another site to replace the lost server or location quickly. The replacement
site could be located close to the original site.
Data corruption or loss in a single database. This could be the result of a disk failure,
a user error, or even an application error. The recovery plan must enable the missing data
to be recovered quickly, possibly keeping unaffected parts of the database available.
Loss of performance or service. The database may be healthy but inaccessible or running
slowly due to the failure of one or more database services providing access to the database.
The recovery plan must provide steps for identifying and restarting the failing service.
Security failure. The database may be operating healthily but with compromised security. This could be due to malfunctioning or malicious software, a virus attack, or a
configuration error. The recovery plan must provide steps for rectifying security breaches
and protecting data.

PRIORITIZING DATABASE RECOVERY STEPS


In the event of a reinstall effort that impacts several databases, you need to recover the most
important databases first. The following list provides some guidelines you can follow for establishing the relative importance of databases and the order in which you should recover them:

CERTIFICATION READY?
Know the sequence
of the restore process
and when to use
the RECOVERY or
NORECOVERY options.

Identify the most critical databases. Classify the importance of databases in terms of
the losses that will be incurred if the databases are unavailable or the data they contain
becomes insecure. Consider any dependencies between databases. For example, if your
client database depends on a small configuration database, you'll most likely have to
restore the configuration database first, even though the information that this database
contains isn't economically important.
Identify critical processes. Related processes can be just as important to restore as the
databases. Often, business applications involve more than one database, even more than
one service, such as SQL Server Integration Services, SQL Server Agent, and Message
Queuing. Some of these processes will be instrumental to the core business functionality
of your applications, whereas others may be less important.
List recovery steps in the correct order based on business needs. Identify the core
business processes that are most important and which must be recovered first. You
should consider recovering the databases that support these processes as the first priority.
In a large organization, you can also identify processes and their corresponding databases
that have secondary or even tertiary importance.
Establish recovery success criteria. It's important that you have a means to verify
the success or failure of each step in the recovery process. Make sure that personnel


performing each recovery operation have a written procedure and are required to test the
results of the step. The decision tree must include the expected results and the possible
actions to take if these results don't occur. You can document these further actions as
branches in the tree or references to other parts of the decision tree.

DOCUMENTING A RECOVERY DECISION TREE


To be effective, a recovery plan must have sufficient detail and clarity to be used in the event
of data loss. Ensure that the logical flow of the decision trees provides enough detail that the
database administrator can easily understand the steps and execute them under stressful conditions. Because of the elevated risk of error, you don't want a database administrator to have
to make critical decisions during a time of high stress.
Documenting the recovery strategy for each catastrophic scenario. Each type of disaster
calls for a tailored set of recovery steps. The most effective means of documenting these steps
is to use a decision tree based on a flowchart or matrix. Although you can reference one decision tree from another, you should document the recovery strategy for each scenario separately.
The decision tree should specify the category of disaster, the likely symptoms (to enable
the administrator to verify the possible cause if it isn't obvious), and a series of commands
and operations that the administrator should perform. Each command or operation should
include information enabling the administrator to verify the success of the step and specify
what to do as each step succeeds or fails.
If a particular decision tree becomes too complex, consider turning parts of it into subtrees. If
that isn't possible, the scenario itself may be too complex and need to be broken into smaller
subscenarios.
Practicing and recording recovery times for each step. You should document the expected
time required to execute every major step in the recovery plan. You can usually obtain estimates by rehearsing each step in the plan using realistic volumes of data, extrapolating if necessary. With reasonable estimates about how long the major steps will take to perform, you
can reliably predict how long the overall recovery process will take.
Recovery procedures can include replacement of hardware, so you must factor in the time to
replace hardware. If you don't have spare hardware, refer to your hardware support service-level
agreement (SLA), which specifies the maximum downtime that your service contract stipulates.
The most effective method for ensuring that your plan works is to rehearse it periodically and
validate it.

BEST PRACTICES FOR MAINTAINING A RECOVERY PLAN


Perhaps the most important challenge with a database disaster recovery and business continuity
plan is keeping it up to date as a live document and making sure all relevant staff members
understand how to use it:
Disseminate the recovery plan. Ensure that the database disaster recovery plan is
available to everyone who needs it. When multiple copies of the plan are in circulation,
make sure you have one place, such as a source-code control system or network share,
where the original and the latest copy of the plan are kept.
Periodically rehearse the recovery plan. The best way to ensure a smooth recovery process is to properly rehearse the recovery plan. You should make this a high-priority task
and schedule it regularly. Your plan will work much better if staff are thoroughly familiar
with the steps involved.
When rehearsing the plan, you should vary the disaster scenarios to determine how well
your plan works in different situations. Use the rehearsals as an opportunity to revise
and update the plan.
Your rehearsal policy should ensure that you rehearse the plan enough to keep it fresh
and current but not so often that it interferes with normal work requirements.


CERTIFICATION READY?
Disaster Recovery can
take many forms. Read
any exam question
carefully to identify what
situations a Disaster
Recovery Plan should
address.

LAB EXERCISE

Perform Exercise 11.7 in your lab manual.

Periodically validate the recovery plan. The time to specify the criteria for recovery
success is before a disaster occurs, not after. Steps you should take include the following:
Verify that a database restore was successful. It isn't enough to verify that you can
query the database. You must also verify that appropriate users can log on to the server
and access the correct data.
Verify that the correct (most recent) data was restored. Consider including T-SQL
scripts in the plan that can verify the timeliness of the restored data. This means you
need to verify that the most recent or up-to-date data is present.
Determine and communicate the extent of any data loss. No matter what you do,
you still may lose some data that was entered or changed during a certain period of
time. This can happen, for example, if you're using log shipping and you weren't able
to save the active part of the transaction log before the disaster.
Validate recovery-success criteria. Have clear criteria that you can apply that test the
resulting databases for correctness. Make sure the recovery plan supports the required
applications. In the event of a catastrophic failure that extends beyond the database
(say, a building fire), you'll need to integrate your activities with the organization's
recovery plan.
Validate the contact list, software locations, and hardware locations. Often the
simplest parts of a recovery plan are the elements that you overlook. Apart from the
technical steps involved in recovering the data, you must ensure that the supporting
information is up to date.
Conduct periodic checks to validate that people in the contact list still have the same
phone or extension number and hold the same position in the organization. You must
also ensure that backup hardware and software are located where you expect to find them.
Revise the recovery plan based on periodic rehearsals and validations. Database
recovery plan rehearsals, along with actual disaster recoveries, give you essential feedback about the usefulness and relevance of your recovery processes and documentation.
Incorporating that feedback into the recovery plan document is the most effective way of
keeping your plan up to date and keeping you prepared.
Rehearse and revise the recovery plan when the infrastructure changes. An organization's infrastructure doesn't stand still. When you add new databases, applications,
hardware, or software, or when you upgrade to new releases of software, it's important to
verify that the recovery plan still works and to update it as necessary.
In Exercise 11.7, you'll develop a disaster recovery decision tree.

SKILL SUMMARY
In this Lesson, you've examined the topic of designing a data-recovery solution for
databases. Throughout, you've learned how to decide which steps you should take to
determine what data-recovery technologies to use based on business requirements. You've
learned how to analyze business requirements and assess alternative techniques and
models to save copies of critical business data for archiving and how to plan for data
archival access.
You've learned how to select from different backup formats and determine the number of
devices to be used for backups; how to specify what data to back up; and the frequency,
techniques, types, and recovery models to employ.
Key to the Lesson has been developing an understanding of how to create recovery plans. You
learned the questions to ask, the methods to utilize, and the procedures to follow. You've
discovered that in the midst of a seemingly catastrophic failure, something as simple as a
well-thought-out decision tree can save you and your organization countless hours of time
and ensure your ability to recover from all but the most egregious of problems.


For the certification examination:

Know how to perform full, differential, transactional, and filegroup backups. You need to
know the T-SQL syntax and SQL Server Management Studio methods of performing the
various backups. You should also focus on the advantages and disadvantages of the types
of backups.

Know about the various database recovery models. You need to know when to use the
simple, bulk-logged, or full recovery model and the options, advantages, and disadvantages of each.

Know how to restore a database. You need to know the T-SQL syntax and Management
Studio methods for restoring databases.

Know how to recover from a complex crash scenario. You need to know how to recover
from complete crashes of SQL Server, as well as from a crashed or suspect database.

Know how to design a decision tree. You need to know how to design a disaster recovery
tree and what elements to include.

Knowledge Assessment
Case Study
Waves Styles on George
Waves Styles on George is a large fashion and apparel service agency serving as a
wholesaler for approximately 14,000 subagencies and outlets over a broad geographic
area. The company is headquartered in the city of Trevallyn, which also serves as
northern headquarters, with 407 employees. Three branch offices are located in
Devonport (eastern operations), Ravenwood (western), and Meriwether (southern).

Planned Changes
The company wants a complete disaster recovery plan overhaul and a reevaluation of its
backup strategy.

Existing Data Environment


The company currently has six databases: Customer, Contractor, Accounting, Orders,
HumanResources, and Parts. The Contractor database is not written to very often.

Existing Infrastructure
The company has three existing SQL Server 2005 computers running with default
instances, which contain the following databases:
WGServer1: Accounting and HumanResources
WGServer2: Customer
WGServer3: Contractor, Orders, and Parts

Business Requirements
Users need to be able to access their data at any time of the day or night. The Customer
database must not fail when a single hard disk on the server fails. The Customer database is very volatile, with numerous changes daily during business hours of 09:00 to
18:00. Most of the changes occur during the afternoon hours. Very few changes are
made during nonbusiness hours. Business requirements allow for up to one hour of data


loss. Requirements state that no more than six backups should be required to recover
any given database. Following tests of different backup scenarios, it was determined that
full backups were not to be done during business hours and differential backups should
be performed only once during business hours.

Technical Requirements
The existing named instance configuration can't be changed because it's mandated by
the disaster recovery plan.
The recovery model for the Orders database must be full recovery.

Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
Use the information in the previous case study to answer the following questions:
1. You are asked to design the backup schedule for the Customer database. Fill in the
blanks of the following table using these backup types (each selection may be used once,
more than once, or not at all): full, differential, copy, transaction log, incremental.
SCHEDULE                              BACKUP TYPE
Once per day at 23:00
Twice per day at 12:00 and 19:00
Hourly, during business hours

2. What does the NORECOVERY switch do?


a. There is no such switch.
b. It cleans out (truncates) the log without making a backup.
c. It makes a backup of the log without cleaning it.
d. It loads a backup of a database but leaves the database offline so you can continue
restoring transaction logs.
3. You need to enable more frequent backups of only the volatile data that is stored in the
Orders database. What should you do?
a. Add database log backups of the Orders database.
b. Add full database backups of the Orders database.
c. Add differential backups of the Orders database.
d. Add differential backups created by the Windows Backup Utility.
4. When do you need to use the REPLACE switch?
a. There is no such switch.
b. When you are restoring into a database with data.
c. When you are restoring into a database that is marked read only.
d. When the database you are restoring into has a different name than the originally
backed-up database.
5. Which program would you use to create backup jobs?
a. Transfer Manager
b. Backup Manager
c. Security Manager
d. Management Studio


6. You need to design a method for testing and verifying that future backups of
WGServer2 can be restored and that the databases stored in the backups do not
contain any allocation, structural, or logical integrity problems. Which two actions
should you perform?
a. Restore the backups to another SQL Server computer.
b. Run DBCC CHECKDB on the original databases.
c. Run DBCC CHECKDB on the restored backups.
d. Use the RESTORE VERIFYONLY command.
7. You need to be able to restore the Parts database at any given time, but it is a very large
database with many inserts and updates; a full backup takes nine hours. You implement
the following strategy: You schedule a full backup every week, with differential backups
every night. You set the recovery option to simple to keep the log small, and you schedule transaction log backups every hour. Will this solution work?
a. This solution works very well.
b. This solution will not work because you cannot combine differential backups with
transaction log backups.
c. This solution will not work because you cannot schedule transaction log backups
with full database backups.
d. This solution will not work because you cannot schedule transaction log backups
when you have selected the simple recovery model for a database.
8. You have three filegroups (FilesA, FilesB, FilesC) in the HumanResources database. You
are rotating your filegroup backups so that each filegroup is backed up every third night.
You are also doing transaction log backups. The files in FilesB get corrupted. Which
of these steps would you take to restore the files? (Choose all that apply, and list your
answers in the order the steps should be taken.)
a. Restore the transaction log files that were created after the FilesB backup.
b. Restore the FilesB filegroup.
c. Back up the log with the NO_TRUNCATE switch.
d. Restore the entire HumanResources database.
9. As discussed previously, Waves Styles on George has a database called Customers. You
are performing full backups every night at 23:00 and differential backups at 12:00 and
19:00. On Tuesday at 17:45, a user deletes all the rows of the table. You discover the
error at 21:00. What is the correct way to restore the Customers database?
a. Restore the full backup from Monday night. Restore the 19:00 differential backup
until 17:44.
b. Restore the full backup from Monday night. Restore the differential from Tuesday
until 12:00.
c. Restore the full backup from Monday night. Restore the differential from 12:00.
d. Restore the full backup from Monday night. Restore the differential from 19:00.
10. You need to configure the Orders database to meet the technical requirements. Which
T-SQL statement should you use?
a. ALTER DATABASE Orders SET RECOVERY SIMPLE
b. DBCC CONFIGDB BACKUP TYPE Orders SIMPLE
c. ALTER DATABASE Orders SET RECOVERY FULL
d. ALTER DATABASE Orders SET RECOVERY MODE TO FULL

Designing a Data-Archiving Solution

LESSON 12

LESSON SKILL MATRIX

TECHNOLOGY SKILL (EXAM OBJECTIVE)
Select archiving techniques based on business requirements. (Foundational)
Gather requirements that affect archiving. (Foundational)
Ascertain data movement requirements for archiving. (Foundational)
Design the format and media for archival data. (Foundational)
Specify what data to archive. (Foundational)
Specify the level of granularity of an archive. (Foundational)
Specify how long to keep the archives. (Foundational)
Plan for data archival and access. (Foundational)
Specify the destination for archival data. (Foundational)
Specify the frequency of archiving. (Foundational)
Decide if replication is appropriate. (Foundational)
Establish how to access archived data. (Foundational)
Design the topology of replication for archiving data. (Foundational)
Specify the publications and articles to be published. (Foundational)
Specify the distributor of the publication. (Foundational)
Specify the subscriber of the publication. (Foundational)
Design the type of replication for archiving data. (Foundational)

KEY TERMS
archive: A repository containing historical records that are intended for long-term preservation.
format: The organization of data stored on some form of media. This could be a SQL Server
backup format, a simple TXT file containing comma-separated value (CSV) data, or some other
form of organization of the data.
media: The physical item used to store data. Tapes are a common form of media, as are
individual optical storage items such as CDs and DVDs. The type of media used must match
the physical hardware device used for reading from and writing to the media. As an example,
an AIT tape cartridge (the media) must only be used in an AIT-type tape drive.
replication: A set of technologies for copying and distributing data and database objects from
one database to another and then synchronizing between databases to maintain consistency.
topology: The manner in which the components of a system are arranged or interrelated,
including adjacency and connectivity.



One of the results of a well-designed and expertly crafted database is that it fills with
data. Over time, the sheer amount of data may begin to overwhelm the database system.
At the same time, not all of the data needs to be instantly available. Trying to keep it all
where it can be obtained immediately may serve the opposite goal and begin to degrade
your database, making accessing records more difficult than necessary.
You've probably seen a similar phenomenon in your day-to-day activities. You may have
started by keeping all your financial records, receipts, bank statements, cancelled checks, and
the like readily available in a file drawer or box. After a few years, the volume of material
probably convinced you to toss out some of the material you no longer need. The remainder
of the older stuff, such as five-year-old tax returns, you may have removed from the current
file drawer, inventoried, and placed in a cardboard box in your attic for future reference.
In doing so, you went through all the steps that describe a data-archiving strategy and
solution. And you thought this Lesson was going to be difficult!
In this Lesson, you'll first review the whys and wherefores of data archiving and make sure
you're clear on why such a system needs to be part of any database infrastructure design.
Then, you'll be introduced to the fictional company Yanni HealthCare Network, which you'll
use throughout this Lesson to illustrate how to visualize data-archiving concepts. Finally,
you'll go through the basic process of designing a data-archiving solution, including determining business and regulatory requirements and what data will be archived, selecting a storage
format, developing a data-movement strategy, and designing a replication topology if replication is used.

Deciding to Archive Data?


The principal reason to archive data is performance.
THE BOTTOM LINE

Storing historical data (data that doesn't need to be immediately accessible) online reduces
the performance of a database server. Conversely, archiving yesterday's data improves query
performance for today's data, decreases the use of disk space, and reduces the maintenance
window:
Improved query performance. If a production database contains historical data that
is never or rarely used, queries take longer to execute because they have to scan the
historical data. Moving the historical data from the production database to another server makes queries more efficient, and you can still query the archive server if necessary.
Decreased disk space use. Because historical data frequently uses more disk space than
the active data, one obvious advantage to archiving it is that you can free up disk space
for other purposes. Think back to the earlier example of your financial records: By
archiving your records off-site (in this case, in a cardboard box), you opened space in
your file drawer.
Financially, archiving can save you money. For example, if the historical data is stored on
an expensive disk system, such as a Storage Area Network (SAN), archiving the data will
significantly reduce your storage costs.
Reduced maintenance time. Removing historical data makes basic maintenance tasks,
such as backup, defragmentation, and reindexing of tables, more efficient and reduces
the time required for these operations. In particular, archiving reduces the time required
for database backup and restore operations.
Reduced costs. Removing historical data may solve performance problems on some
systems, especially where hardware is barely adequate.


Although there are many advantages to archiving historical data, archiving isn't a cure-all for
whatever ails your system. Archiving can't provide a solution for issues such as poorly chosen
indexes, design flaws, improper file placement, and inadequate maintenance and hardware.
Typically, data archiving improves the performance of database servers. However, the results
may sometimes be disappointing. For example, if the amount of data to be archived is relatively small, it may not be worthwhile to archive it, and archiving it may not produce the
desired results in terms of improved performance.
Although archiving data can be a complex process, designing an archiving solution is relatively
simple if you approach the problem in a systematic manner. At a minimum, a data-archive
plan defines both the scope of archiving and the architecture of the archived data. Once you
have the process down, you'll be able to apply the same steps and procedures to nearly any
situation and come up with a plan that meets your needs.
In creating a data-archive plan, you should take the following steps:
1. Determine business and regulatory requirements.
2. Determine what data will be archived.
3. Select a storage format and media type.
4. Develop a data-movement strategy.
5. Design a topology if replication is used.

Determining Business and Regulatory Requirements


As you've seen throughout this textbook, your most important initial consideration in
any design aspect should be to determine any business and regulatory requirements that
will impact your design.
The amount of online data that is required by users depends on an organization's business.
For example, enterprises in the health care industry have different requirements than organizations in the banking industry. To identify the online data requirements of an organization,
consult with key stakeholders. By working with the stakeholders, you can identify what data
needs to be available and what doesn't need to be accessible immediately. You can then make
plans to move the latter to backup copy devices or less expensive alternatives.

Case Study: Presenting a Data-Archiving Scenario


The Yanni HealthCare Network serves a total current population of 500,000 patients.
During its 30-year history, it has registered approximately two million persons for whom
it has provided care. Currently, all medical laboratory diagnostic test results have been
digitized and are maintained in an On-Line Transaction Processing (OLTP) database.
The laboratory results database has been growing at a rate of 1.5 percent per month
and contains a large amount of data that is almost never updated and rarely queried.
This historical data has slowed server-maintenance operations such as reindexing and
defragmentation. Because of the large size of the database, running queries is becoming difficult. The final straw was reached when the chief of medicine requested a simple
cross-tab query on three years' worth of data: The query began on a Friday and wasn't
completed until Tuesday morning because of the volume of records.
Governmental health regulations require that all laboratory test results be maintained
for 25 years. Clinical caregivers state that they require immediate access to the past five
years' worth of data for queries and reports. Archived data must be available by the next
day following the submission of the request. All data, as is the case with any medical
data, must be secure and confidentiality maintained. There must be at least two copies
of the archived data stored in different locales. In addition, risk-management personnel,


accountants, and the research staff are insistent that all information be both retained
online and archived. Finally, the organization has sufficient budget to purchase one or
more new servers for storing the archived data.
Throughout this Lesson, you'll use this scenario as a tool to show how to design a data-archiving solution.

Business regulations may stipulate the length of time that data must be accessible online. For
example, in many countries, banks are required by law to maintain certain customer data
online for a specific number of years. Health care providers are also subject to laws and regulations related to maintaining confidential information, often for long periods of time. Other
businesses, such as manufacturing and retail sales organizations and government agencies, may
also need to comply with certain regulations. These regulations may influence data-archiving
requirements in varied ways and lead to interesting and creative archive solutions. You must
consider the impact of regulatory requirements when determining what data can be stored
offline and how quickly it must become available online when requested.
Another consideration is how much of the data you need. Users may not need detailed data
after a certain period. In such cases, you can maintain summary tables online and archive the
detailed data to offline storage.
Applying these concepts to the Yanni HealthCare Network scenario, you can see that combined
business and regulatory requirements require you to keep all the laboratory data in some
manner. The level of detail required isn't clear, and this is the sort of question you should
discuss with the key stakeholders and management at the hospital. In these discussions, you'll
be expected to listen and provide solutions. You'll also be expected to explain the impact of
these requirements on the database system and any performance issues you foresee.
You need to consider one other type of requirement in your design: the accessibility of data.
Some data needs to remain online and immediately available. Other data can be removed
from immediate access but may need to be readily accessible in a short period or long period
of time. In reviewing and creating your archival data plan, you need to consider these factors
as well as the acceptable turnaround time. Once youve done that, you can stratify the data
based on relevant time frames. Accessibility requirements and turnaround time also determine
the storage formats and media you can use for archiving data.
Finally, you need to accurately assess the requirements against what the stakeholders want and
what the stakeholders need. Users often demand that historical data be maintained online
because they don't want to risk losing access to it. However, after the data is archived, they
rarely access the data. This perception of risk to accessibility can lead key stakeholders to
be reluctant to move data offline. One of your tasks as a database administrator must be to
accurately scope what data needs to be archived and to determine the impact on accessibility.
When proposing a data-archive strategy, you should communicate the benefits of archiving
the data and share a plan for ensuring the security and accessibility of the archived data.
Based on the requirements spelled out in the Yanni HealthCare Network scenario, you
need to keep all existing and future data. The users want to be able to access five years of
data online. Governmental regulations require maintenance of data for 25 years, and your
management team wants to retain all data forever.
You review the data and determine that maintaining five years of data is an expensive and
resource-intensive use of assets. You do a study that indicates only 1 percent of all queries
specify data greater than three years old. You bring this information to the attention of the
stakeholders; and after examining the balance between cost, performance, and need, all sides
agree that data three to five years old can be maintained elsewhere, provided the turnaround
time is less than three hours. The result is a structured data-archive plan for accessibility based
on age of data, as shown in Table 12-1.
As you'll see later in this Lesson, accessibility requirements also influence the structure of
archival data and affect the planning for data-storage formats and media.



Table 12-1
Accessibility requirements

CERTIFICATION READY?
Imagine various business
scenarios and how to
meet unique needs. For
example, you have four
distant warehouses and
corporate headquarters
needs to query the
archived data once
a week. Change the
scenario to storing
the data at corporate
headquarters but analysts
only query it once a
year. Keep tweaking the
parameters and redesign
appropriately until you're
comfortable with any
situation.

ACCESSIBILITY REQUIREMENT    AGE OF DATA
Immediate access             Data less than 3 years old
3-hour access                Data more than 3 years old but less than 6 years old
24-hour access               Data between 6 and 25 years old
48-hour access               Data more than 25 years old

Determining What Data Will Be Archived


Now that you've established the relevant business and regulatory requirements that need to be
considered, as well as defined the accessibility requirements, you can turn your attention to
determining what data can be archived. This is also known as identifying the historical data.
As you develop your data-archival plan, clearly identify the data that has been selected for
archiving and the justifications for that selection. You should also describe the criteria you've used
to select the data and show how the selection derives from business, regulatory, and accessibility requirements, as well as any other factors that may influence your design.
Several basic tasks are involved in deciding what should be archived:
Identify historical data. In the Yanni HealthCare Network scenario, the definition of
what is historical was determined by the business and regulatory requirements. All data
must be maintained, but data more than three years old doesn't need to be maintained
online for updates and queries. That data can be archived.
But what if there aren't any specific guidelines, or you think it may be possible to meet
the existing requirements while changing the expected archiving approach?
To do that, you should analyze tables that belong to the core application and identify
data that is never updated and rarely queried. You should then present this assessment
as the justification for your design, and delineate between online and archived data. In
the previous section, you used this approach to convince management to reduce online
data from the past five years to the past three years. A corollary is to establish a sliding
window in time that delimits online data from archival data. In our example, you can
archive data that is more than three years old.
Another way to determine whether data is a good candidate for archiving is to use tools
such as SQL Trace and SQL Profiler to determine whether users have accessed a table or
a set of rows in a table during a given period; a sketch of a related check that uses dynamic management views appears after this list.
Determine whether there is a savings in disk-space cost. You should archive data only
if it is beneficial to do so. If a sizable amount of disk space will be recovered by archiving
data, making the savings in disk-space cost significant, then data archiving is justified.
Conversely, it may not be worthwhile to archive data that uses a small amount of disk
space. When estimating the savings in disk-space cost, remember that archiving data results
in smaller backup files, further reducing the use of disk space and other storage media. In
the Yanni HealthCare Network scenario, it's clearly beneficial to move 25 years' worth of
data out of the database as initially proposed. The suggestion to move data older than three
years rather than five years to archive should be made only if there will be a genuine savings; otherwise, accepting the initial requirements would be a reasonable course of action.
Determine the performance benefits. As you learned earlier, archiving data helps
reduce disk, memory, and CPU usage. You can use the System Monitor tool to determine how the performance of system resources will improve with archiving. You should
also consider the impact of archiving data on maintenance tasks, such as reindexing,
defragmentation, and backup.
Establish the archiving interval. You can determine the archiving interval based on
your business needs and the nature of the data. For example, if you need to maintain the last
two years' worth of data online, you can archive data at either monthly or weekly intervals. If you archive monthly, then 25 months of data (2 years plus 1 month) would exist
online just prior to the monthly archival process. A minimal sketch of such a monthly sweep appears after this list.

CERTIFICATION READY?
Imagine various business scenarios and answer the questions: How long must data be stored? What table attributes must be maintained? Is it better to denormalize the data? What answers will users seek? Should you create a cube instead?
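The following is a minimal sketch of the access-analysis idea mentioned earlier, using the sys.dm_db_index_usage_stats dynamic management view instead of a Profiler trace. The table name dbo.LabResults is a hypothetical stand-in for one of the scenario's laboratory tables, and because the view's counters are cleared whenever the SQL Server instance restarts, the figures cover only the current uptime.

-- Hypothetical check: how often, and how recently, has dbo.LabResults been read?
-- Counters in sys.dm_db_index_usage_stats reset when the instance restarts.
SELECT  i.name AS index_name,
        us.user_seeks,
        us.user_scans,
        us.user_lookups,
        us.last_user_seek,
        us.last_user_scan
FROM    sys.dm_db_index_usage_stats AS us
JOIN    sys.indexes AS i
        ON  i.object_id = us.object_id
        AND i.index_id  = us.index_id
WHERE   us.database_id = DB_ID()
        AND us.object_id = OBJECT_ID(N'dbo.LabResults');

If the seek and scan counts stay near zero over a representative period, the table (or the older rows in it) is a reasonable archiving candidate.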
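As a rough illustration of a monthly archiving interval, the sketch below moves rows older than a two-year cutoff from a hypothetical production table to a hypothetical archive table in a single transaction. The table and column names (dbo.LabResults, dbo.LabResultsArchive, ResultDate) are assumptions rather than part of the scenario's actual schema, and a real job would typically run from a scheduled SQL Server Agent job and move data in batches.

-- Minimal monthly sweep (hypothetical schema): copy the old rows, then delete them.
DECLARE @Cutoff datetime;
SET @Cutoff = DATEADD(YEAR, -2, GETDATE());

BEGIN TRANSACTION;

INSERT INTO dbo.LabResultsArchive (LabResultID, PatientID, ResultDate, ResultValue)
SELECT LabResultID, PatientID, ResultDate, ResultValue
FROM   dbo.LabResults
WHERE  ResultDate < @Cutoff;

DELETE FROM dbo.LabResults
WHERE  ResultDate < @Cutoff;

COMMIT TRANSACTION;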

STRUCTURING ARCHIVAL DATA


To ensure the smooth movement of data from a production database to the archival media,
you need to structure the data properly. SQL Server supports the use of partitioned, normalized, denormalized, and summary tables to structure archival data. You'll examine the various
ways in which you can structure archival data. After that, you'll cover the factors you should
consider when choosing the structure of archival data.
You can structure archival data by using the following types of tables:

REF
For more information about the table-partitioning features in SQL Server, see the article on the MSDN Web site at http://msdn.microsoft.com/en-us/library/ms345146.aspx.

Partitioned tables. You can use fully partitioned tables to structure archival data.
Partitioned tables were introduced with SQL Server 2005 and are more effective than
the older union-partitioned views for managing very large tables and indexes. Partitioned
tables are also easier to maintain than union-partitioned views. With union-partitioned
views, it can be difficult to find an appropriate check constraint on which the partition
can be based, and queries across the views don't always select the appropriate partition correctly.
You can place partitioned tables and their indexes in separate filegroups. In addition, you
can automatically repartition data among tables. You can also switch tables in and out
of a partition (see the partition-switch sketch after this list). After a table is switched out of a partition, you can move the table and its
index to the archival destination.
Normalized tables. Archiving related data together keeps the historical context of the data.
Normalized tables can be used to structure archival data and maintain that historical context. If
you use normalized tables, a key consideration is to make sure the tables can accommodate
changes in lookup values or related tables. One way to accomplish this is to add date-range
validity to the normalized tables. You can then specify the date ranges for valid lookup
values. Note that archiving relational data often requires the archiving of additional data
involved with foreign keys. Normalized data requires these key relationships.
Denormalized tables. If you're unable to archive all related data together, you can use
denormalized tables to preserve the historical context of the data. Denormalized tables
store actual values rather than references to the current data. Therefore, these tables are
most useful for optimizing queries that involve complex joins.
In addition to denormalized tables, you can use indexed views to denormalize data
(a sketch of an indexed view appears after this list). Because denormalized tables persist data physically, you can retrieve data from them
more quickly than from indexed views. However, denormalized tables require additional
disk space. Denormalized tables also must be periodically rebuilt and aren't automatically
updated like indexed views.
Summary tables. You may not need to maintain detailed data after a certain period. In
such cases, you can keep summary tables online and archive the detailed data to offline
storage. For example, you may have a database that stores monthly sales revenue by
product. It may be possible to remove and archive the detailed data while maintaining
only the monthly summaries.
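To make the partition-switch idea concrete, here is a minimal sketch that switches the oldest partition of a hypothetical partitioned table into an empty staging table so the data can be carried to the archive destination. The object names, along with the assumptions that partition 1 holds the oldest data and that the staging table has an identical structure on the same filegroup, are illustrative only.

-- Hypothetical: dbo.LabResults is partitioned by date; dbo.LabResults_Stage is an empty
-- table with an identical structure, on the same filegroup as partition 1.
ALTER TABLE dbo.LabResults
    SWITCH PARTITION 1 TO dbo.LabResults_Stage;

-- The switch is a metadata-only operation; the staging table now holds the oldest slice
-- of data and can be exported, backed up, or copied to the archive server.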
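As a sketch of the indexed-view option mentioned in the denormalized-tables discussion, the following creates a schema-bound view that pre-joins two hypothetical tables and then materializes it with a unique clustered index; the table, column, and view names are assumptions.

-- Hypothetical denormalizing view; SCHEMABINDING and two-part table names are
-- required before the view can be indexed.
CREATE VIEW dbo.vLabResultsDenorm
WITH SCHEMABINDING
AS
SELECT  lr.LabResultID,
        lr.ResultDate,
        lr.ResultValue,
        t.TestName
FROM    dbo.LabResults AS lr
JOIN    dbo.LabTests AS t
        ON t.LabTestID = lr.LabTestID;
GO

-- Materialize the view; SQL Server maintains it automatically as the base tables change.
CREATE UNIQUE CLUSTERED INDEX IX_vLabResultsDenorm
    ON dbo.vLabResultsDenorm (LabResultID);
GO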

CHOOSING WHICH STRUCTURE TO USE


When choosing the structure of archival data, consider the following factors:
Data accessibility. If a new application will be developed to access the archived data,
denormalized tables are a good choice. Alternatively, you can maintain only some of the
detailed information and discard the remaining data. If the current application must be
able to use the same mechanism to access both online and archived data, the two types
of data must be structured identically.
In addition, accessibility requirements influence the structure of archival data because
they determine the following:
The constraints that limit the ability to update archived data.
The amount of space that can be used for storing archived data.


The time frame for accepting updates to archived data. This, in turn, may depend on
regulatory requirements.
The rules for archiving.
Storage costs. When developing a structure for archival data, you must be mindful of the
hardware, media, and, often, software costs for storing the data. As a rule of thumb, storing archival data online has been more expensive than storing the data offline, although this may be
changing. If you choose to use denormalized tables for your archived data, additional disk
space will be needed, with concomitant increases in storage costs. You can reduce hardware
costs by keeping only the summary data online and storing detailed data offline. The hardware
used for creating offline storage media then becomes a factor, along with the per-unit cost of the
offline media and any special software for writing to and reading from the media.
CERTIFICATION READY?
Imagine various business
scenarios and list the
considerations a plan
must include. Is there
a networking element?
When is data no longer needed?

Offline storage can involve hidden costs such as transportation or retrieval costs charged by
offsite couriers. In addition, you need to ensure that the security of the data that is stored
offline isn't compromised. It's also possible that in the event of a major disaster, access to
archived data may not be as smooth as hoped. In the aftermath of the 9/11 attacks on New York
and Washington, DC, as well as Hurricane Katrina in New Orleans and the Gulf of Mexico, offsite
data-storage centers were overwhelmed by demands for archival and replicated current data.
If the structure or format of archival data differs from the source online data, additional
expenses may be incurred for developing applications and reports to access the archived data.

Selecting a Storage Media Type and Format


THE BOTTOM LINE

Backups and archiving of historical data require the selection and use of storage media
and storage format types. This section provides information on how to make appropriate selections.
Storage format refers to the logical structure of data on a type of physical media that is used
to store the archived data.
Examples of storage format could be SQL backup files, CSV data extracts, or TXT files from
BCP. The physical media can be disk, tape, optical storage such as CDs and DVDs, or other less
frequently used types of media. Depending on your requirements, you can store archived data on
tape or on low-cost magnetic or optical media. With disk costs falling, there is an increasing movement toward storage on disk. You can also store archived data in a separate database on the server
that hosts the production database. This would be an example of storing archival data in SQL
Server database format. Alternatively, you can use a dedicated server to store archived data. The
choices for storage media and format are influenced by the structure and accessibility requirements
of the archival data. Each type of media and format has different characteristics with respect to
cost, accessibility, shelf life, reliability/durability, security, and changing technology:
Cost. If you need to archive a large volume of data, the cost of storage can be significant.
As a rule of thumb, for large amounts of data, tapes are cheaper than disks or optical
media; but disk prices keep falling, and in some instances disks may make better sense.
Accessibility. If the archived data must be quickly accessible, you should normally
use some form of online storage. Traditional offline media such as tape or CDs can be
accessed quickly if you invest in some form of robotic media library. This could be a
considerable extra hardware expense.
Shelf life. Shelf life refers to the lifetime of the storage media. Many types of digital storage media, such as DVDs or LTO tapes, are relatively new, and their shelf life may not be
easily determined. If you opt to keep the archive media in your control, make sure you
follow vendor recommendations for storing your archived data in proper environmental
conditions (for example, store tapes in a cool, dry place). If you use an external vendor for
storage, check from time to time to make sure the data is held properly. Also consider the
shelf life of any required hardware. If data was archived to reel-to-reel tape 20 years ago,
can you get a working reel-to-reel 9-track tape drive that can read the archival tapes?


Reliability and durability. You must take into account the relative durability and
reliability of the media used. Some types of media are more sensitive to handling and usage
than others and may degrade faster. For example, tapes tend to deteriorate more easily than
disks or optical media. It's worth assessing the differences in order to ensure that archived
data is readable from archival media. If the ability to retrieve data from old media is critically important, you should have multiple redundant copies of the data and periodically
perform read testing to ensure that the media can be read. Having a redundant copy then
allows you to make a fresh copy should it be determined that any one copy is unreadable.
Security. There are many ways to provide for encryption. However, the administrative
overhead and third-party products involved vary. For example, there are third-party
products for encrypting data on both tapes and disks. In addition to encrypting archival
data, you should ensure that the data is stored in a secured location.
Changing technology. Once you have your plan in place, you need to be ready to adapt
it to changes in technology, because the relative costs of the various options may shift. For
example, the authors of this book have gone from storing archive material on floppy
disk to hard drive or Zip drive to CD to DVD or SANs. It's likely that such technological
innovation will continue unabated, and you'll need to be prepared to revise your plans.
Now, apply the previous discussion to the Yanni HealthCare Network scenario and design a
table that shows the archival design as well as summarizes the storage formats and media (see
Table 12-2). Note that the business and regulatory requirements have led you to conclude
that tape isn't the best approach for data more than six years old. Although tape may be an
appropriate option in most cases, the need to conduct data studies using the historical data
leads to a trade-off in favor of the more expensive but more manageable option of an archive
server with data on disk, as opposed to tape.
Table 12-2
Data-archiving strategy: Storage format/media accessibility

REQUIREMENT: Immediate access
AGE OF DATA: Data less than 3 years old
STORAGE FORMAT/MEDIA: OLTP server (the production server)

REQUIREMENT: 3-hour access
AGE OF DATA: Data more than 3 years old, but less than 6 years old
STORAGE FORMAT/MEDIA: For access to archived data within 3 hours, you can use an archive server. The storage capacity of the archive server should typically be the same as or higher than that of the main server. Archival servers usually need fewer system resources and may have lower processing capabilities than the main server hosting the production database.

REQUIREMENT: 24-hour access
AGE OF DATA: Data between 6 and 25 years old
STORAGE FORMAT/MEDIA: For access to archived data within 24 hours, you can use storage media such as tapes. However, in this case, because you want to be able to access this data to meet other business and regulatory requirements, you must use an archive server.

REQUIREMENT: 48-hour access
AGE OF DATA: Data more than 25 years old
STORAGE FORMAT/MEDIA: For access within 48 hours to archived data that is likely to be only rarely accessed, you can use storage media such as tapes. Although tapes are slower than hard disks and optical media, they're relatively inexpensive but potentially less reliable. Be aware that older tapes and older tape drives can be very problematic. Can you read any tape that is 25 years old or older today?


CERTIFICATION READY?
Imagine various business
scenarios and justify a
storage solution best
meeting the trade-offs
between cost, longevity,
access speed, reliability,
security, and building
requirements. Do you
need a remote hot site?

Developing a Data-Movement Strategy


A data-movement strategy describes how archival data is moved from the server that
hosts the production database to the destination storage format. When developing the
strategy, you should consider the frequency of data movement and its effect on network
traffic. If data is to be moved to an archive server, you must determine whether to use
direct or indirect data transfer based on the type of connection between the production
and the archive servers. Finally, you must consider the security risks involved in moving
data and define measures to safeguard the data during movement:
Specify the frequency of data movement. You can move archival data from the server
that hosts the production database to the destination storage format periodically or on
an ad hoc basis. Best practice is to move data on a specified schedule rather than ad hoc
because then it can be easily automated and tested, resulting in fewer errors.
Minimize the impact of data movement on production activities. Schedule data
movement as you would other routine maintenance activities, for times when the user
load is low. Sometimes it's better to have a schedule of moving small datasets frequently
rather than a large dataset infrequently.
Also consider how data movement will affect reporting. For example, suppose our
fictional hospital ran its summary of laboratory reports for billing purposes on a
monthly basis. You would need to schedule data-movement activities so they didn't affect
the generation of reports.
Also make sure archival data is moved from the production server to the destination
storage location in an optimal manner. For example, you can decide to transfer archival
data to another server with good disk performance, and then move it to an archive server,
rather than moving it directly from the production server. This technique minimizes the
amount of time and resources devoted to archiving by the production server.
Choose direct or indirect transfer. If data is to be moved to an archive server, the type
of connection between the production and archive servers can affect the way you transfer the data. When there is a direct connection, tools such as SQL Server Integration
Services (SSIS) and replication can be used for directly transferring data. You can also
use queries to transfer data between linked servers. If the connection between the two
servers isn't direct and doesn't allow you to use SQL Server tools, you need to devise
a different method of moving the data to the archive server. For indirect data transfer,
you can use tools such as SSIS and the Bulk Copy Program (BCP) to create extract
files that can then be copied over a network connection using any one of many possible
tools (a BCP sketch appears after this list). As you saw in the previous Lesson, you can also
package the data with the SQL Server BACKUP command.
Ensure the security of data during movement. All storage formats and network
connections involved in data movement must be secure. Data stored on portable media,
such as a tape or flash drive, is more vulnerable to theft and security attacks than data
stored on an archive server in a secure data center. You should always use encrypted data
transfer and encrypted files when dealing with sensitive data, which is virtually all data
in an enterprise.
Prescribe steps for data verification. You don't want to move archival data to the
destination storage location, delete it from the source, and then discover that it wasn't
successfully copied. The data-movement strategy must include steps for data verification.
For example, if tapes are used to store archival data, you must retrieve the data to verify
that it has been correctly copied. Similarly, you can verify data that has been copied to
disks or optical media by viewing the data.
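For the indirect-transfer path, the following is a hedged sketch of a BCP extract and reload; the database, table, file path, and server names are hypothetical, and the -n switch writes the file in SQL Server native format so it can be reloaded without data-type conversion.

rem Export archival rows from the production server to a native-format extract file.
bcp "SELECT * FROM Yanni.dbo.LabResultsArchive WHERE ResultDate < '20060101'" queryout D:\Extracts\LabArchive.bcp -n -S PRODSQL01 -T

rem After copying the file across the network, load it on the archive server.
bcp ArchiveDB.dbo.LabResultsArchive in D:\Extracts\LabArchive.bcp -n -S ARCHSQL01 -T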


Designing a Replication Topology


THE BOTTOM LINE

All forms of data replication involve multiple database servers. A fundamental step in developing a replication process is to design the topology of the involved servers and how the data
will be replicated across this topology.
The sole purpose of replication is to copy data between servers. Several good reasons exist for
having a system that does this:
If your company has multiple locations, you may need to move the data closer to the
people who are using it.
If multiple people want to work on the same data at the same time, replication is a good
way of giving them that access.
Replication can separate the functions of reading from writing data. This is especially
true in OLTP environments where reading data can place a load on the system.
Some sites may have different methods and rules for handling data (perhaps the site is a
sister or child company). Replication can be used to give these sites the freedom of setting their own rules for dealing with data.
Mobile sales users can install SQL Server on a laptop, where they may keep a copy of
an inventory database. These users can keep their local copy of the database current by
connecting to the network and replicating.
You'll probably be able to come up with even more reasons to use replication in your enterprise.
Another application of replication involves archival data. At first, this may seem counterintuitive. Replication is normally associated with synchronizing live (active) databases. Archival
data could be viewed as historical and static and not a good candidate for replication, but that
isn't always the case.
There may be business reasons or regulatory requirements that mandate the existence of more
than one copy of the same archival data. Securities and Exchange Commission regulations
17a-3 and 17a-4, for example, stipulate that an exact duplicate of archived electronic records
must be stored separately from the original. Because archival data must be updated with
newly archived records, even if infrequently, replication is an easy way of ensuring that the
data remains synchronized between different locales and copies of archived databases.
Replication also addresses both regulatory compliance and disaster-recovery needs. A key
question you'll have to answer is whether replication of the archival data is appropriate for your situation.
You can also use replication to provide higher availability of archival data. With replication,
if one archival site isn't available for any reason, then another site can be used to service the
request. When a dataset is archived, several options are available for replicating this data for
high availability, as you'll see shortly.
Replication is most appropriate for distributing copies of data among databases. It's also the tool
of choice for supporting multisite updates and mobile users who are occasionally connected.
Although replication provides limited support for data transformation, it's best suited for circumstances where the structure of the data on the publisher and on the subscribers is the same.

UNDERSTANDING AND ADMINISTERING REPLICATION TOPOLOGIES


A replication topology defines the relationship between servers and copies of data as well as
clarifies the logic that determines how data flows between servers. In general, the replication
topology you design depends on many factors, including the following:
Whether replicated data needs to be updated, and by whom
Your data distribution needs regarding consistency, autonomy, and latency
The replication environment, including business users, technical infrastructure, network
and security, and data characteristics



The types of replication and replication options
The replication topologies and how they align with the types of replication

Figure 12-1
SQL Server can publish, distribute, or subscribe to publications in replication. The figure shows a publisher (which contains the original copy of the data, organized into a publication made up of articles), a distributor (which collects changes from publishers), and a subscriber (which receives a copy of the data).

UNDERSTANDING THE PUBLISHER/SUBSCRIBER METAPHOR


Microsoft uses the publisher/subscriber metaphor to make replication easier to understand
and implement. It works a lot like a newspaper or magazine publisher. The newspaper has
information that people around the city want to read; the newspaper publishes this data and
has news carriers distribute it to the people who have subscribed. As shown in Figure 12-1,
SQL Server replication works much the same in that it too has a publisher, distributor, and
subscriber:

TAKE NOTE
A SQL Server can be any combination of these three roles.

CERTIFICATION READY?
Imagine various
business scenarios.
What replication
strategy works best
for each? When can
other technologies
(mirroring, copying
to another location,
distributed transactions,
backup and restore) be
better solutions than
replication?

Publisher. In SQL Server terminology, a publisher is a server with the original copy of
the data that others need, much like the newspaper publisher has the original data that
needs to be printed and distributed. The data is organized into publications, which consist of smaller datasets called articles.
Distributor. Newspapers need carriers to distribute the newspaper to the people who
have subscribed, and SQL Server needs special servers called distributors to store and
forward initial snapshots of publications and distribute them to subscribers. Distributors
can also store transactions that need to be sent to subscribers.
Subscriber. A subscriber is a server with a database that receives copies of publications
from a publisher. A subscriber is akin to the people who need to read the news and
therefore subscribe to the newspaper.
The analogy goes even further: All the information isn't lumped together in a giant scroll and
dropped on the doorstep; it's broken into publications and articles so that it's easier to find
the information you want to read. SQL Server replication follows suit:
Article. An article is data from a table that needs to be replicated. You probably don't need
to replicate all the data from the table, so you don't have to. Articles can be horizontally
partitioned, which means not all records in the table are published; and they can be
vertically partitioned, which means that not all columns need to be published.
Publication. A publication is a collection of articles and is the basis for subscriptions. A
subscription can consist of a single article or multiple articles, but you must subscribe to
a publication as opposed to a single article.
Now that you know the three roles that SQL Servers can play in replication and that
data is replicated as articles that are stored in publications, you need to learn the types
of replication.

UNDERSTANDING REPLICATION TYPES


It's important to control how publications are distributed to subscribers. If the newspaper
company doesn't control distribution, for example, many people may not get the paper when
they need it, or other people may get the paper for free. In SQL Server, you need to control


distribution of publications for similar reasons, so that the data gets to the subscribers when
it's needed.
There are three basic types of replication: transactional, snapshot, and merge (all of which are
discussed in the following sections). Consider the following key factors when choosing a replication type:
Autonomy. The amount of independence your subscribers have over the data they
receive. Some servers may need a read-only copy of the data, whereas others may need to
be able to make changes to the data they receive.
Latency. How long a subscriber can go without getting a fresh copy of data from the
server. Some servers may be able to go for weeks without getting new data from the publisher, whereas other instances may require a very short wait time.
Consistency. The most popular form of replication may be transactional replication,
where transactions are read from the transaction log of the publisher, moved through
the distributor, and applied to the database on the subscriber. This is where transactional
consistency comes in. Some subscribers may need all the transactions in the same order
they were applied to the server, whereas other subscribers may need only some of the
transactions.
Once you've considered these factors, you're ready to choose the replication type that will
work best for you.

Introducing Transactional Replication


In transactional replication, individual transactions are replicated. Transactional replication is preferable when data modifications are to be replicated immediately, or when
transactions have to be atomic (either all or none applied). A primary key is required,
because each transaction is replicated individually. As described next, there are three key
types of transactional replication: standard, with updating subscribers, and peer-to-peer.

USING STANDARD TRANSACTIONAL REPLICATION


All data modifications made to a SQL Server database are considered transactions, regardless of
whether they have an explicit BEGIN TRAN command and corresponding COMMIT TRAN
(if the BEGIN ... COMMIT isn't there, SQL Server assumes it). All of these transactions are
stored in a transaction log that is associated with the database. With transactional replication,
each of the transactions in the transaction log can be replicated. The transactions are marked
for replication in the log (because not all transactions may be replicated), and then they're
copied to the distributor, where they're stored in the distribution database until they're copied
to the subscribers via the Distribution Agent.
The only drawback is that subscribers to a transactional publication must treat the data as read
only, meaning that users can't make changes to the data they receive. Think of it as being like
a subscription to a newspaper: if you see a typo in an ad in the paper, you can't change it
with a pen and expect the change to do any good. No one else can see your change, and you'll
get the same typo in the paper the next day. Transactional replication has high consistency,
low autonomy, and middle-of-the-road latency. For these reasons, transactional replication is
usually used in server-to-server environments.

USING TRANSACTIONAL WITH UPDATING SUBSCRIBERS


This type of replication is almost exactly like transactional replication, with one major difference:
The subscribers can modify the data they receive.
The two types of updatable subscriptions are immediate and queued. Immediate updating
means what it says: the data is updated immediately. For this sort of update to occur at the
subscriber, the publisher and subscriber must be connected. In queued updating, the publisher
and subscriber dont have to be connected to update data at the subscribers, and updates can
be made while either is offline.


When data is updated at a subscriber, the update is sent to the publisher when next connected.
The publisher then sends the data to other subscribers as they become available.
Because the updates are sent asynchronously to the publisher, the publisher or another
subscriber may have updated the same data at the same time, resulting in conflicts when
updates are applied. All conflicts are detected and resolved through a conflict-resolution
policy defined when the publication is created.

USING PEER-TO-PEER TRANSACTIONAL REPLICATION


Transactional replication also uses peer-to-peer replication to support updating data by
subscribers. This method is designed for applications (as opposed to another SQL Server)
that may modify the data at any of the databases participating in replication. An example is
an online shopping application that modifies the contents of a database with each order or
purchase (for example, updating the mailing lists, changing the inventory, and so on).
A key difference between peer-to-peer transactional replication and the standard (read-only) or
updating-subscriber forms of transactional replication is that peer-to-peer replication
isn't hierarchical. Instead, all the nodes are peers, and each node publishes and subscribes to
the same schema and data. Hence, each node contains identical schema and data.

INTRODUCING SNAPSHOT REPLICATION


Whereas transactional replication copies only data changes to subscribers, snapshot replication
copies entire publications to subscribers every time it replicates. In essence, it takes a snapshot
of the data and sends it to the subscriber every time it replicates. This is useful for servers that
need a read-only copy of the data and don't require updates very often; they could wait for
days or even weeks for updated data.

TAKE NOTE

Snapshot replication
is principally used to
establish the initial set
of data and database
objects for merge
and transactional
publications.

A good example of where to use this type of replication is in a department store chain that
has a catalog database. The headquarters keeps and publishes the master copy of the database
in which changes are made. The subscribers can wait for updates to this catalog for a few days
if necessary.
The data on the subscriber should be treated as read only here as well because all the data
will be overwritten each time replication occurs. This type of replication is said to have high
latency, high autonomy, and high consistency.
Snapshots are created using the Snapshot Agent and stored in a snapshot folder on the publisher.
Snapshot Agent runs under SQL Server Agent at the distributor and can be administered
through Management Studio.

INTRODUCING MERGE REPLICATION


This is by far the most complex type of replication to work with, but it's also the most flexible.
Merge replication allows changes to be made to the data at the publisher as well as at all the
subscribers. These changes are then replicated to all other subscribers until your systems reach
convergence, the blessed state at which all your servers have the same data. Because of its
flexibility, merge replication is typically used in server-to-client environments.
The biggest problem with merge replication is known as a conflict. This problem occurs
when more than one user modifies the same record on their copy of the database at the same
time. For example, suppose a user in Florida modifies record 25 in a table at the same time
that a user in New York modifies record 25 in their copy of the table. A conflict will occur
on record 25 when replication takes place, because the same record has been modified in two
different places; SQL Server has two values from which to choose. Conflict-resolution priority
is specified through the New Subscription Wizard or in Management Studio. You can also
use Management Studio's Replication Conflict Viewer tool to examine and resolve conflicts.
Careful attention must be given to how conflicts are resolved.
Merge replication works by adding triggers and system tables to the databases on all the servers
involved in the replication process. When a change is made at any of the servers, the trigger fires
and stores the modified data in one of the new system tables, where it resides until replication


occurs. This type of replication has the highest autonomy, highest latency, and lowest transactional
consistency.

MANAGING A REPLICATION TOPOLOGY


After you've configured replication for your archival data, you should establish a replication
topology and include the following activities in your design:
Develop and test a backup-and-restore strategy. As discussed in Lesson 11, all databases
should be backed up on a regular basis, and the ability to restore those backups should be
tested periodically. Replicated databases are no different. The following databases should be
backed up regularly:

Publication database
Distribution database
Subscription databases
msdb database and master database at the publisher, distributor, and all subscribers

Script the replication topology. All replication components in a topology should be scripted
as part of a disaster-recovery plan. The scripts can also be used to automate repetitive tasks.
A script contains the T-SQL system stored procedures necessary to implement the replication
component(s), such as a publication or subscription. Scripts can be stored with backup files
to be used in case a replication topology must be reconfigured.

TAKE NOTE

Scripts can be created in a wizard (such as the New Publication Wizard) or in SQL Server
Management Studio after you create a component. You can view, modify, and run the
script using SQL Server Management Studio or sqlcmd.
A component should be rescripted if any property changes are made. If you use custom stored
procedures with transactional replication, a copy of each procedure should be stored with the
scripts and the copy should be updated if the procedure changes.
Understand replication performance. Before replication is configured, you need to review
and understand the factors that affect replication performance and how to manage them:

Server and network hardware


Database design
Distributor configuration
Publication design and options
Filter design and use
Subscription options
Snapshot options
Agent parameters
Maintenance

Establish a performance baseline. After replication is configured, you should use Replication
Monitor and System Monitor to determine how replication behaves with your typical workload and topology. Determine typical values for the following five dimensions of replication
performance:
Latency. The amount of time it takes for a data change to be propagated between nodes
in a replication topology
Throughput. The amount of replication activity (measured in commands delivered over
a period of time) a system can sustain over time
Concurrency. The number of replication processes that can operate on a system
simultaneously


Duration of synchronization. How long it takes a given synchronization to complete


Resource consumption. Hardware and network resources used as a result of replication
processing

TAKE NOTE

Latency and throughput are most important in transactional replication because transactional replication generally requires low latency and high throughput. Concurrency and
duration of synchronization are most relevant to merge replication, because systems built
on merge replication often have a large number of subscribers, and a publisher can have a
significant number of concurrent synchronizations with these subscribers.
Create thresholds and alerts. Replication Monitor allows you to set a number of thresholds
related to status and performance. It's recommended that you set the appropriate thresholds for
your topology; if a threshold is reached, a warning is displayed, and, optionally, an alert can be
sent to an e-mail account, a pager, or another device. Note that SQL Server replication provides a number of predefined alerts that respond to replication agent actions. Administrators
can use these alerts to stay informed about the state of the replication topology.
Monitor the replication topology. Monitoring a replication topology is an important part of
deploying replication. Because replication activity is distributed, it's essential to track activity
and status across all computers involved in replication. Replication Monitor is the most important tool for monitoring replication, allowing you to monitor the overall health of a replication
topology. T-SQL and Replication Management Objects (RMO) provide interfaces for monitoring replication. System Monitor can also be useful for monitoring replication performance.

CERTIFICATION READY?
Imagine various
business scenarios.
Under what conditions
do merge, snapshot,
and transactional
replication make sense?
For example, what if
you have unreliable
cross country network
connectivity? What if
local host users keep
changing the archived
data? What if the host
servers keep crashing?

Validate data periodically. Validation isn't required by replication, but it's recommended that
you run validation periodically for transactional replication and merge replication. Validation
lets you verify that data at the subscriber matches data at the publisher. Successful validation
indicates that at a point in time, all changes from the publisher have been replicated to the
subscriber (and from the subscriber to the publisher, if updates are supported at the subscriber),
and the two databases are in sync.
Adjust publication and distribution retention periods if necessary. Transactional replication and merge replication use retention periods to determine, respectively, how long
transactions are stored in the distribution database, and how frequently a subscription must
synchronize. You should monitor your topology to determine whether the current settings
require adjustment. For example, in the case of merge replication, the publication retention
period (which defaults to 14 days) determines how long metadata is stored in system tables.
If subscriptions always synchronize within five days, consider adjusting the setting to a lower
number to reduce the amount of metadata and possibly provide better performance (a brief example of such a change appears after this list).
Understand how to modify publications if application requirements change. After you've
created a publication, it may be necessary to add or drop articles or change publication and
article properties. Most changes are allowed after a publication is created, but in some cases,
it's necessary to generate a new snapshot for a publication and/or reinitialize subscriptions to
the publication.
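As a brief, hedged example of adjusting a merge publication's retention period, the following lowers the setting from the 14-day default; the publication name is hypothetical, and you should confirm the current value and the effect on any late-synchronizing subscribers before changing it in production.

-- Run at the publisher, in the publication database (publication name is hypothetical).
EXEC sp_changemergepublication
    @publication = N'YanniArchivePublication',
    @property = N'retention',
    @value = N'7';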

DESIGNING A REPLICATION STRATEGY


In addition to deciding whether replication is appropriate to your archival plan, you should
also perform the following tasks:
Determine the requirements for replication with archival data. The archival data
to be replicated may be centralized at one location or dispersed at multiple locations.
There may be requirements for multiple copies of the archival data, or specific types of
access may be required. You should also estimate the maximum allowable latency for
distributing an update that is made on a publisher to its subscribers. Keep in mind that
the network and communication infrastructure must be able to support the maximum
latency. You may be able to reduce network usage by placing the distributor closer to the


subscribers. For example, if you place the publisher in one location and all the subscribers in
another location, you can place the distributor near the subscribers to reduce the long-distance
network traffic across slower-speed WAN links.

LAB EXERCISE
Perform Exercise 12.1 in your lab manual.
Select a replication type. The degree of consistency between the database on a publisher and
the replicated database on a subscriber depends on whether you use snapshot, transactional,
or merge replication. For example, if you use snapshot replication, the subscriber receives
periodic snapshots of the publication stored on the publisher. As a result, between snapshots the
database on the subscriber can be inconsistent with the database on the publisher. Conversely, transactional
replication distributes the publication to the subscriber with low latency. Therefore, the
database on the subscriber is more consistent with the database on the publisher.
Create a replication topology diagram. A replication topology diagram helps you
understand the flow of data between replication partners. If a diagram of the existing
replication topology isn't available, you must create a diagram and update it each time you
change the topology. The diagram should depict all the servers involved in replication and
the databases they host, the role of the servers, and the direction of data flow between
the servers.
For each database, identify the direction of replication. If a table exists in several databases,
ensure that the table isn't replicated multiple times through different paths. To identify
the fastest path between replication partners, consider the speed of the network connection
and its usage level.
Determine the distributors. Determine the placement of a distributor with respect to
the corresponding publisher. You can place the distributor and the publisher on separate
servers, or you can place them on the same server.
Determine subscribers. Based on the data requirements of each subscriber, you can
identify the publications it requires. In addition, you must determine whether the
subscriber should be allowed to modify the published data and return the modified data
to the publisher. In general, this isnt a concern with archival data, because the subscriber
is the archive server, which should not ever convert the original data on its own.
Choose either push or pull subscription. If you configure the distribution agent on
the distributor, the replication process is called push subscription. Conversely, if you
configure the distribution agent on a subscriber, the replication process is called pull
subscription. When determining whether to use push or pull subscription, you should
consider the number of subscribers and the memory and CPU requirements of each
subscriber. Typically, push subscription is used to minimize the resource utilization on the
subscribers. If you want to offload the distribution agent overhead from the distributor to
the subscribers, pull subscription is a better choice.
Determine the security requirements. When determining the security requirements of
your replication topology, make sure to identify security accounts for replication agents
and the File Transfer Protocol (FTP) access rights for replication.

In Exercise 12.1, you'll learn how to apply the replication principles you've learned about in
this Lesson to the Yanni HealthCare Network scenario.

ACCESSING ARCHIVED DATA


Now that you've designed your archive plan, decided what data will be archived, as well as
where and how the data will be archived, how do you access that data when you need it?
Typically, you make an application archive-aware by inserting a small row in the active table
keyed to the archived row. When a user requests this archive row, a process is set in place to
deal with the request. This may involve a simple query to another table, a DVD library, a
message sent to a computer-room attendant asking her to mount a tape, or a message sent
back to the original user asking whether they really need the data and explaining any time delays.
If the archived data is still in the database but has just been moved to an alternate location,
then getting to the data will be almost transparent to the users. At the other extreme, when
the archived data is stored on tapes that have to be retrieved from storage and mounted manually, the end user may decide that the delay is such that they don't need the data after all.
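When the archived rows still live in a SQL Server database, one simple way to make access archive-aware is a view that unions the online and archive tables so that applications and reports query a single name; the database, table, column, and view names below are hypothetical.

-- Hypothetical: current rows live in dbo.LabResults, older rows in ArchiveDB.dbo.LabResults.
CREATE VIEW dbo.vLabResultsAll
AS
SELECT LabResultID, PatientID, ResultDate, ResultValue
FROM   dbo.LabResults
UNION ALL
SELECT LabResultID, PatientID, ResultDate, ResultValue
FROM   ArchiveDB.dbo.LabResults;
GO

Queries written against dbo.vLabResultsAll return both online and archived rows without the caller needing to know where the data physically resides.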


SKILL SUMMARY
In this Lesson, you've examined the topic of archiving data as it relates to database infrastructure.
You learned the role of business, regulation, and accessibility requirements and how they
shape the ultimate design of the data-archiving plan. You've learned how to identify data for
archiving and how to plan for data-archive access.
Throughout the Lesson, you've seen how to select from different storage formats and develop
a data-movement strategy. Additionally, you've become familiar with the role of replication in
archiving data and how to design a replication topology.
Most important, in this Lesson, as in the rest of the book, you've learned that database infrastructure design is a complex series of interactions involving myriad variables. You've also learned that
this seemingly overwhelming and difficult process is manageable. All it requires is a basic
understanding of SQL Server, an ability to grasp the elements, and a systematic approach to the
challenges they present. In the final analysis, designing a database infrastructure (or any element
of it) requires that you understand your constraints and requirements and then follow a careful
process to achieve your goal: a well-designed SQL Server database server infrastructure.
For the certification examination:

Understand the different types of replication. Know how replication works and when the
different types are best used. Be sure to understand latency, autonomy, and consistency.

Understand business and regulatory requirements. Know what the different business and
regulatory requirements are and how they affect your archival storage needs.

Understand the different components of a replication topology. Be sure to understand


what publishers, distributors, subscribers, articles, and publications are. You also need to
know how they interact in a replication topology.

Understand the advantages and disadvantages of different storage media and formats. To
effectively select from a number of options, you need to know the different storage types,
the pros and cons of each, and when they're most appropriate. For example, tape and optical
media are low-cost, long-term storage choices for data that is never queried, but they are
not a good option for data that needs to be accessed in a very short period of time.

Knowledge Assessment
Case Study
Developing an Archive Plan
Thylacine Savings & Loan Association is a large financial institution serving
approximately 1.6 million customers over a broad geographic area. The company is
headquartered in the city of Trevallyn, which also serves as northern headquarters,
with 407 employees. Three branch offices are located in Stratford (Eastern operations),
Belleville (Western), and Rock Hill (Southern).
The company currently has a 3 TB OLTP database that tracks more than 2 billion
transactions each year. The main database for all transactions and operations is located
in Trevallyn. Regional databases contain deposit/withdrawal information only and the
headquarters database is updated daily from the regional offices.
Thylacine's departmental database servers are dispersed throughout the headquarters
location. The company is currently experiencing 4 percent annual growth and plans
to expand into four new markets at the rate of one new market every two years. The


database is growing at a rate of 6 percent per year and will exceed available hard-disk space
in the future. Additionally, server capacity is overloaded, resulting in poor performance
and long delays. A large portion of the database data is historical information.
You've been asked to develop a data-archiving plan for their ATM transactions. Once an ATM
transaction has been recorded, it becomes read only. In the event of an error, a correcting
transaction is entered at a later date. The company wants to maintain only the current
month's data in the online database and to archive the remainder to read-only media.
Government regulations require that the company maintain the previous seven years' worth
of records of all ATM transactions and that the data be available within 24 hours.

Multiple Choice
Circle the letter or letters that correspond to the best answer or answers.
Use the information in the previous case study to answer the following questions:
1. Fill in the following table (you may need to modify it) to show the online and archived
data-accessibility requirements. Create time divisions, and classify the data based on
those divisions. Ensure that the data classification meets the query requirements.
DATA SOURCE    ACCESSIBILITY REQUIREMENT    STORAGE FORMAT
Online
Archived
Offline

2. Fill in the following table, summarizing your proposed data-movement schedule.

DATA MOVEMENT    FREQUENCY

3. Which of the following business requirements should be considered when designing an
archival data strategy as called for by Thylacine Savings & Loan Association? (Choose
all that apply.)
a. Cost
b. Government/Industry regulations
c. Accessibility requirements
d. Granularity
4. Which data structure is appropriate if you wish to maintain the historical context of the
archival data, but you cannot archive all the related data together?
a. Partitioned tables
b. Normalized tables
c. Denormalized tables
d. Summary tables


5. If the requirements for the case study were to 1) maintain 36 months of data online
for immediate access for queries and updates and 2) maintain a total of 7 years of data
to meet accounting and reporting requirements, which of the following is the most
appropriate storage format plan?
a. Place the current 36 months' worth of data on an OLTP database server and the
previous 4 years' worth of data on an archive server.
b. Place all the data on the OLTP server, and use partitioning to separate the data
between the current 36 months and the remaining 48 months.
c. Place the current 36 months of data on an OLTP server and the remainder on tape.
d. Use summary tables to reduce the load on the OLTP server, and store all detailed
data on an archive server.
6. The data-movement strategy should contain which of the following steps? (Choose all
that apply.)
a. Verification that data has been copied to the destination storage format
b. Means to ensure the security of data during movement
c. Specification of the frequency of data movement
d. Scheduling of data movement to minimize impact on the production server
7. Which of the following roles can a single server have in a replication topology?
a. Distributor
b. Publisher
c. Subscriber
8. Enhancements to SQL Server 2005 that simplify administration of a replication
topology include which of the following? (Choose all that apply.)
a. Schema changes can be automatically sent to subscribers without using special stored
procedures.
b. Specific tables of a database can be replicated, not necessarily the whole database.
c. A number of wizards in Management Studio make it much simpler to set up the
replication topology, once its designed.
d. All of the above.
9. You are a database administrator for a small company in Tasmania. You maintain a
Sales database that contains a SalesTransactions table. Requirements are that 36 months
of this table data must be stored online in the Sales database and that older data must
be sent to an archive database. Which of the following is the best way to structure the
SalesTransactions table?
a. Partitioned view
b. Table partitioning
c. Denormalization
d. Summary tables
10. Refer to the previous question. Which archival frequency would you use?
a. Daily
b. Monthly
c. Quarterly
d. Annually


Glossary

active directory (AD): The operating
system's directory service that contains
references to all objects on the network.
Examples include printers, fax machines,
user names, user passwords, domains,
organizational units, computers, etc.
archive: A repository containing historical
records that are intended for long-term
preservation.
assembly: A managed application module
that contains class metadata and managed
code as an object in SQL Server. By referencing an assembly, common language
runtime (CLR) functions, CLR stored
procedures, CLR triggers, user-defined
aggregates, and user-defined types can be
created in SQL Server.
asymmetric key: In cryptology, one key,
mathematically related to a second key,
is used to encrypt data while the other
decrypts the data.
audit: An independent verification of truth.
budgetary constraint: Limits placed on your
ability to invest as much as you might
wish in an infrastructure improvement
project.
business continuity plan (BCP): A policy
that defines how an enterprise will maintain normal day-to-day operations in the
event of business disruption or crisis.
camelCase: A method or standard for naming
objects. With camelCase, all characters
are lowercased except the first letter of
component words other than the first
word. An example of camelCase would be:
customerAddress.
capacity: A measure of the ability to store,
manipulate and report information collected for the enterprise. Excess capacity
suggests too much investment in infrastructure or a declining business need.
certificate: A digital document (electronic
file) provided by a trusted authority to
give assurance of a person's identity;
certificates verify a given public key
belongs to a stipulated individual or
organization.
common language runtime (CLR): A key
component of the .NET technology provided by Microsoft that handles the actual
execution of program code written in any
one of many .NET languages.

constraint: A property assigned to a table
column that prevents certain types of
invalid data values from being placed in
the column. For example, a UNIQUE or
PRIMARY KEY constraint prevents you
from inserting a value that is a duplicate
of an existing value, a CHECK constraint
prevents you from inserting a value that
does not match a specified condition, and
NOT NULL prevents you from leaving
the column empty (NULL) and requires
the insertion of some value.
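For example, the following table definition combines these constraint types (a minimal sketch; the table and column names are hypothetical):

    CREATE TABLE dbo.Customer
    (
        CustomerID  int          NOT NULL PRIMARY KEY,  -- entity integrity: no duplicates, no NULLs
        Email       varchar(100) NOT NULL UNIQUE,       -- no two rows may share an e-mail address
        CreditLimit money        NOT NULL
            CHECK (CreditLimit >= 0)                    -- value must satisfy the stated condition
    );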
convention: A convention is a set of agreed,
stipulated, or generally accepted norms or
criteria, often taking the form of a custom.
cryptology: The study or practice of both
cryptography (enciphering and deciphering) and cryptanalysis (breaking or cracking a code system or individual messages).
data control language (DCL): A set of SQL
commands (GRANT, REVOKE, and DENY) that
manage the permissions set on one or more objects.
data definition language (DDL): A subset
of T-SQL commands that create, alter,
and delete structural objects such as tables,
users, and indexes in SQL Server.
data manipulation language (DML):
A subset of T-SQL commands which
manipulate data within objects in SQL
Server. These are the regular T-SQL commands such as INSERT, UPDATE, and
DELETE.
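The three command classes defined above can be illustrated in one short batch (a hedged sketch; the table name and the SalesReader role are hypothetical and assumed to exist where needed):

    CREATE TABLE dbo.Orders (OrderID int NOT NULL PRIMARY KEY);  -- DDL: creates a structural object
    INSERT INTO dbo.Orders (OrderID) VALUES (1);                 -- DML: manipulates data in the object
    GRANT SELECT ON dbo.Orders TO SalesReader;                   -- DCL: adjusts permissions on the object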
database: A collection of information, tables,
and other objects organized and presented to serve a specific purpose, such as
searching, sorting, and recombining data.
Databases are stored in files.
database mirroring: A technology for
continuously copying all data in a database from one server to another so that
in the event that the principal server fails,
the secondary server can take over the
processing of transactions using its copy
of the database.
decision tree: A decision tree is a technique for
determining the overall risk associated with
a series of related risks; that is, it's possible
that certain risks will only appear as a result
of actions taken in managing other risks.
deploying: Migrating and stabilizing your
database servers in the consolidated
environment.

developing: Designing a database migration
plan for the consolidated environment,
creating a solution, and testing the pilot.
disaster recovery plan (DRP): A policy that
defines how people and resources will be
protected in the case of a natural or man-made disaster and how the organization
will recover from the calamity.
encryption key: A seed value used in an
algorithm to keep sensitive information
confidential by changing data into an
unreadable form.
envisioning: Gathering information to
analyze a dispersed environment and identifying potential consolidation problems.
execution context: Execution context is
represented by a login token and one or
more user tokens (one user token for each
database assigned). Authenticators and
permissions control ultimate access.
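As a brief, hypothetical illustration of switching tokens within a batch (the AuditUser database user is assumed to exist):

    EXECUTE AS USER = 'AuditUser';   -- statements now run under AuditUser's user token
    SELECT USER_NAME();              -- returns AuditUser
    REVERT;                          -- restores the original execution context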
extent: A unit of space allocated to an object,
and a unit of data input and output; data is
stored or retrieved from disk as an extent
(64 kilobytes).
failover: A switch between the active and
standby duplicated systems that occurs
automatically without manual intervention. Sometimes known as switchover.
filegroup: A named collection of one or more
data files that forms a single unit of data allocation or administration for a database.
format: The organization of data stored on
some form of media. This could be a SQL
Server backup format, a simple TXT file
containing comma-separated value (CSV)
data, or some other form of organization
of the data.
high availability: High availability is the continuous operation of systems. For a system
to be available, all components including
application and database servers, storage
devices and the end-to-end network need
to provide uninterrupted service.
horizon: A forecasting target. A horizon too
far distant may result in capacity or other
changes that don't prove needed; a horizon
too near may result in investments that
don't meet tomorrow's needs.
index: In a relational database, a database
object that provides fast access to data in
the rows of a table, based on key values.
Indexes can also enforce uniqueness on
the rows in a table. SQL Server supports
clustered and nonclustered indexes. The
primary key of a table is automatically
indexed. In full-text search, a full-text
index stores information about significant
words and their location within a given
column.
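For instance, a nonclustered index on a foreign-key column might be created as follows (a sketch; the table and column names are hypothetical):

    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID);   -- speeds lookups of orders by customer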
instance: A separate and isolated copy of
SQL Server running on a server. Application service providers can support multiple businesses and their database needs
while guaranteeing one business cannot
see the others' data.
log shipping: A technology for high availability that is based on the normal
backup and restore procedures that exist
with SQL Server. In this environment,
transaction-log backups are made on the
principal server and then copied to the
secondary server.
media: The physical item used to store data.
Tapes are a common form of media as
are individual optical storage items such
as CDs and DVDs. The type of media
used must match the physical hardware
device used for reading from and writing
to the media. As an example, an AIT tape
cartridge (the media) must only be used in
an AIT type tape drive.
media retention: A period of time such as
a year, a month, or a week for which any
backup media is not altered and is kept
in the state in which it was created.
After this retention period, the media is
allowed to be reused for a new
backup.
merge replication: A method of replication
which transfers data from one database to
one or more other databases. Data can be
changed in more than one location. This
may cause conflicts to arise.
method: A specific means of action to accomplish a stipulated goal or objective.
mirror database: The passive or secondary
database in a mirroring configuration.
Also known as the secondary database.
object: An object is an allocated region of
storage; an object is named; if the database
structure has a name, it's an object. Examples include database, table, attribute,
index, view, stored procedure, trigger, and
so on.
organizational unit: An object within Active
Directory that may contain other objects
such as other organizational units (OUs),
users, groups, and computers.

page: A unit of data storage. Eight pages
comprise an extent.
PascalCase: A method or standard for naming objects. With PascalCase, all characters are lowercased except the first letter
of each component word. An example of
PascalCase would be: CustomerAddress.
permission: An access right to an object
controlled by GRANT, REVOKE, and
DENY data control language commands.
planning: Evaluating the data you gathered
in the previous phase and creating a
specification to consolidate SQL Server
instances.
policies: A set of written guidelines providing
direction on how to process any number
of issues; e.g., a corporate password policy.
principal database: The active database in a
mirroring configuration.
principal server: The machine that, under
normal operating conditions, provides the
services that a product such as SQL Server
offers.
quorum: The majority of servers in a mirroring configuration. A quorum of two
servers determines which database is the
principal server. In a normal situation, the
principal database and the witness form
a quorum that keeps this primary server
functioning as the primary database in a
mirroring configuration.
recovery model: A database option that
specifies how the write-ahead transaction log records events; the options are
simple, bulk-logged, and full. These
settings influence your protection
against data loss.
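The model is set per database with ALTER DATABASE; for example (the Sales database name is illustrative):

    ALTER DATABASE Sales SET RECOVERY FULL;         -- every operation fully logged; point-in-time restore possible
    ALTER DATABASE Sales SET RECOVERY BULK_LOGGED;  -- bulk operations minimally logged
    ALTER DATABASE Sales SET RECOVERY SIMPLE;       -- log space reclaimed automatically; no log backups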
regulatory requirements: A set of compliance directions from an external organization. This could be a governmental agency
(e.g., the regulator of the Sarbanes-Oxley
Act or HIPAA) or your corporate headquarters.
replication: Replication is a set of technologies for copying and distributing data and
database objects from one database to
another and then synchronizing between
databases to maintain consistency.
role: A SQL Server security account that is
a collection of other security accounts
that can be treated as a single unit when
managing permissions. A role can contain
SQL Server logins, other roles, and Windows logins or groups.
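A minimal sketch of creating and populating a user-defined database role (the role, user, and table names are hypothetical):

    CREATE ROLE SalesReader;                       -- user-defined database role
    EXEC sp_addrolemember 'SalesReader', 'MaryK';  -- add an existing database user to the role
    GRANT SELECT ON dbo.Orders TO SalesReader;     -- permissions are then managed once, for the role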
schema: Each schema is a distinct namespace
that exists independently of the database
user who created it; a schema is a
container of objects. A schema can be
owned by any user, and its ownership is
transferable.
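For example (a sketch; the schema, table, and user names are hypothetical):

    CREATE SCHEMA Sales AUTHORIZATION dbo;          -- a new namespace, initially owned by dbo
    GO
    CREATE TABLE Sales.Region (RegionID int NOT NULL PRIMARY KEY);
    ALTER AUTHORIZATION ON SCHEMA::Sales TO MaryK;  -- ownership is transferable to another user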
scope: A division of SQL Server's security
architecture (principals, permissions, and
securables) that places securables into
server-scope, database-scope, and schema-scope divisions.
secondary database: The passive or secondary database in a mirroring configuration.
Also known as the mirror database.
security measures: The steps taken to assure
data integrity.
security policy: The written guidelines to
be followed by all employees of the enterprise to protect data and resources from
unintended consequences. A security
policy, for example, should exist guiding
all users on how to protect their network
password.
services: Processes that run in the background of the operating system; analogous
to daemons in Unix.
single point of failure: A component
whose failure leads to a collapse of the
whole system.
snapshot replication: A method of replication that involves database snapshots.
This form of replication is not a high
availability solution.
standard: A standard establishes uniform
engineering or technical criteria, processes,
and practices usually in a formal, written
manner.
symmetric key: In cryptology, a single key is
used to both encrypt and decrypt data.
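A minimal, hypothetical example of creating and using a symmetric key protected by a password:

    CREATE SYMMETRIC KEY OrderKey
        WITH ALGORITHM = AES_256
        ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase';
    OPEN SYMMETRIC KEY OrderKey DECRYPTION BY PASSWORD = 'Str0ng!Passphrase';
    SELECT EncryptByKey(Key_GUID('OrderKey'), 'confidential value');  -- the same key decrypts via DecryptByKey
    CLOSE SYMMETRIC KEY OrderKey;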
table: A two-dimensional object, which consists of rows and columns, that stores data
about an entity modeled in a relational
database.
topology: The manner in which the
components of a system are arranged or
interrelated, including adjacency and
connectivity.
transactional replication: A method of replication that transfers transactions from one
database to one or more other databases.
Changes to data are not allowed on the
receiving database(s).
view: An object defined by a SELECT statement that permits seeing one or more
columns from one or more base tables.
With the exception of instantiated views
(indexed views), views themselves do not
store data.
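For example (a sketch; the underlying table and columns are hypothetical):

    CREATE VIEW dbo.vActiveCustomers
    AS
    SELECT CustomerID, Email
    FROM   dbo.Customer
    WHERE  CreditLimit > 0;   -- the view stores no data; rows come from the base table at query time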
witness server: An optional third server
used in some mirroring configurations
to initiate the automatic failover within
seconds of the principal server failing.

Index

% disk time, 69
@loopback_detection, 229
64-bit, 22–23, 76

A
Access
auditing, 101102
categories, defining, 134
direct, 202
indirect through stored procedures, 202
indirect through views, 202
standards, 201203
Active/active three-node cluster, 217
Active Directory (AD)
as authentication mechanism, 9394
dened, 87, 89
Active/passive two-node cluster, 216
Address Windowing Extensions (AWE), 22
ALTER ENDPOINT statement, 142, 143
ALTER INDEX command, 35
ALTER statement, 137, 139, 141
ALTER TABLE command, 35
Antivirus software, 122
Applications
analyzing for consolidation, 6667
authentication roles, 95
domains, 162
migration, 7879
monitoring for consolidation, 6768
roles, 136137
Archive, defined, 265, 266
Archiving data, 265280
accessibility, 269, 271, 272, 280
business requirements, 267268
costs, 266, 269, 271
data-movement strategy, 273
deciding what to archive, 269272
disk space use, 266, 269
historical data, identifying, 269
interval, 269270
maintenance time, 266
merge replication, 277278
performance, 269
query performance, 266
reasons to, 266273
regulatory requirements, 267268
reliability, 272
replication topology, 274280
scenario, 267268
security, 272
shelf life, 271
snapshot replication, 277
storage media and format, 271272
structure of, 270271

table types, 270


transactional replication, 276277
Arrays
RAID, 3637, 231232, 233
SAN storage, 232233
Assemblies
creating, 161
dened, 150, 161
EXTERNAL_ACCESS, assemblies, setting, 146
SAFE assemblies, setting, 145146
trusted, 162
UNSAFE assemblies, setting, 146
Asymmetric key
defined, 107, 111–112
mechanism, 132
Attacks
HTTP, 100
server, 99101
SQL Injection, 93, 100
virus, 122
Audit, defined, 87, 88
Authentication
Active Directory, as mechanism of, 9394
application roles, 95
certificate-based, 132
choosing a method, 94
groups and roles, 9496
impersonation and delegation, 9596
Kerberos, 9495
key-based, 132
logins, creating, 130131
network policies, 9697
security, administrative, 95
SQL Server, 9394, 108
Windows, 9394, 108
Availability, High. See High Availability
Average Disk Queue Length, 69
Avg. Disk Read/sec, 69
Avg. Disk Write/sec, 69

B
Backups
antivirus software, 122
business requirements, 252
compression, 35, 42
data, 243246
database categories based on recovery criteria, 252253
devices, 244
differential, 245
documenting, 256
filegroup, 246, 251
file storage, designing, 41–42
frequency of, 255
full, differential, and transaction log, 250251

full only, 244, 249250
full with differential, 250
full with transaction log, 250
partial and partial differential, 251
processes, 10
RAID, 41
recovery types, and, 253255
restore strategy, designing, and, 251253
security policy, 255256
storage, offsite, 255256
strategy, 249257
strategy, key steps, 252
transaction log backups, 42, 245246
validation and testing policy, 256257
very large databases (VLDBs), 246
Best practices
infrastructure, 14
password rules, 110
recovery plan, 260261
BitLocker, 256
Budgetary constraints, defined, 1, 5
Bulk log backup, 42
Business continuity plan (BCP), defined, 242, 257
Business requirements
archiving data, 267268
backup strategy, 252
capacity, 56
infrastructure, 56
views, 187188

C
Cache management, on-chip, 22
camelCase, defined, 194, 198
Capacity
analysis, 4
business requirements, 56
changing, 4
CPU, 11
defined, 1, 4–5
designing for, 613
disk space, 6
disk space, growth rate, 78
disk throughput, 7
estimation period, 7, 11
horizon, 6
memory usage, 1213
network traffic, 9–10
network requirements, 910
regulatory requirements, 5, 89
sources of, 6
storage requirements, 69
technical requirements, 45
Certificate, defined, 107, 110
Clients
database mirroring, 223
log shipping, 227228
network trafc, 10
Clustering, 77, 98, 214220
configuration, 216–218
consolidation, and, 77
costs, 215

versus database mirroring, 220221


enhancements, 218219
four-node, 218
geographic design, 215, 219
hardware decisions, 219220
instances, 47
licensing costs, 220
multisite, 219
nodes, 215, 216218
reporting options, 235
requirements, 215
scope, 221
solution design, 216218
three-node active/active, 217
two-node active/passive, 216
WSC-certified hardware, 215
Cluster Validation Tool, 218
Collation conflicts, 73
Column(s)
compression, 35
computed, 182
datatypes and constraints, 177
denormalization, 171172
encryption, 159161
filtering, 189
identity, 178
naming conventions, 197
normalization, 171, 175
SessionLoginName, 158
sparse, 3536
Common language runtime (CLR), defined, 150, 161
Common language runtime (CLR),
programming, 97
Common language runtime (CLR),
security, 161164
application domains, 162
assemblies, creating, 161
assemblies, trusted, 162
EXTERNAL_ACCESS, 162
module signing, 162163
policy, developing, 163164
resources, external, accessing, 162163
Compression, data. See Data compression
Configuration
approved, 14
business requirements, 56
clustering, 216218
database mirroring, 221224
hardware and software, 1325
holistic thinking, 3
infrastructure, 26
instances, tempdb, 5253
log shipping, 226227
network, clustering, 215
regulatory requirements, 5
standardization, 3
Configuration Manager, 52–53
CONNECT statement, 141, 142
Consolidation
application migration, 7879
applications, envisioning, 6668
clustering, and, 77

costs, 6061, 64
data evaluation, 7475
deciding against, 6364
deciding for, 6063
decisions, initial, 7577
deploying (phase 4), 8283
developing (phase 3), 8082
environment, examination of, 6673
envisioning (phase 1), 5973
geographic, 65
geographical issues, 7172
goals, 6466
guidelines, 65
hardware acquisition, 8081
instance, 65
management, 6162
on-line analytical processing (OLAP), 65
on-line transaction processing (OLTP), 65
physical server, 65
pilot, 81
planning (phase 2), 7380
proof of concept, 81
resources, use of, 6263
return on investment, 62
risk factor, 64
scope creep, 7980
security, 61
service-level agreements, 71
storage, 65
sunk cost factor, 64
team formation, 5960
types, 65
Constraint(s)
budgetary, 1, 5
check, 181
column, 177
default, 181
defined, 168, 170
naming conventions, 197
prefix, 196
unique, 181182
using, 180182
Contact list, 257258
CONTROL statement, 141
Convention, defined, 194
Convention, naming. See Naming convention
Conventions and standards. See Database, conventions
and standards
Costs
archiving data, 266, 269, 271
clustering, 215, 216, 220
consolidation, 6061
cooling capacity, 61
database recovery, 252
data loss, 252
electrical, 6061
high availability, 234
return on investment, 62
salaries, 60
security, 9293
soft versus hard, 61
storage, archival, 271

CPU
64-bit versus 32-bit, 2223, 76
affinity mask, 11
architecture, 2223
capacity, 11
clustering, 219220
consolidation, envisioning, 68
counters, 68
estimation period, 11
evaluating, 74
hot add, 2425
hyperthreading, 2223
monitoring for consolidation, 68
multicore, 22
operating system licenses, 11
as single point of failure, 211
sizing, 78
SQL Server editions, 23
types, 11, 2223
usage, 11
CREATE APPLICATION ROLE
command, 136
CREATE ASSEMBLY command, 145, 146
CREATE ENDPOINT command, 137, 143
CREATE INDEX command, 35
CREATE statement, 137, 139
CREATE TABLE command, 35, 36
CREATE USER statement, 132, 137
Cryptology, dened, 107, 111

D
Data archiving. See Archiving
Database, defined, 169
Database, development, location and role,
203204
Database, physical, design
default databases, 50
denormalization, 171172
documentation and diagramming, 172
filegroups, 182–184
index usage, 184186
information, gathering, 170
normalization, 171
objects, inventory, 171
planning, 170171
schema, documentation, 173
table design, 173182
views, 187189
Database, production, location and role,
203204
Database, system, location, 4950
Database, test, location and role, 203204
Database access
auditing, 101102
direct, 202
indirect through stored procedures, 202
indirect through views, 202
standards, 201203
Database conventions and standards, 194205
naming conventions, 195199. See also Naming conventions
standards, 200205. See also Standards, database
Database Diagram Designer, 173

Database files
groups, 45
location of, 4445
naming, 45
setting up, 44
size, 45
types, 44, 169
Database Maintenance Plan Wizard, 244
Database mirroring
client applications, 223
versus clustering, 220221
conguration, 221224
dened, 210, 220
endpoints, 224
enhancements, 225
hardware, 220
high availability, security, 98
high-availability mode, 222
high-performance mode, 222
high-protection mode, 222
modes, 222223
network trafc, 10
principal database, 221
protection levels, 222223
quorum, 221
reporting options, 235
scope, 221
secondary database, 221
server roles, designing, 221
solution design, 223
testing, 224225
transaction logs, 38
witness server, 221
Database restore. See Restoring databases
Database size, estimating, 3337
capacity, planning for, 34
data compression, 3435
RAID, 3637
sparse columns, 3536
Database standard(s)
access, 201203
defined, 194, 200
deployment process, 203205
security, 205
Transact-SQL coding, 200201
Data center, cooling capacity, 61
Data compression
mirroring, 225
page, 35
row, 35
Data control language (DCL)
commands, 152
dened, 150
Data definition language (DDL), defined, 129
Data definition language (DDL) triggers, 137–139. See also Triggers, DDL
Data les
log, 31, 44, 169
primary, 3132, 44, 169
secondary, 31, 32, 44, 169
Data manipulation language (DML), 129
Data recovery
backing up data, 243246

backup strategy, 249257


mitigation plans, 257261
model, choosing, 253255
restoring databases, 246249
transaction logs, and, 38
Datatypes
built-in, 177180
column, 177
user-dened, 180
DBCC SQLPERF (LOGSPACE), 3940
DDL triggers. See Triggers, DDL
Decision tree, defined, 242
DecryptByKeyAutoCert() function, 112
Delegation and impersonation, 9596
Denormalization, database, 171172
DENY statement, 136, 141
Deploying, consolidation, 8285
defined, 58, 82
stafng, 83
Deployment process, database, 203205
production data, protecting, 204
recording changes, 204205
staff responsibilities, 204
Developing, consolidation, 8082
defined, 58, 80
design reexamination, 82
hardware acquisition, 8081
pilot, 8182
proof of concept, 81
Development database, role and location of, 203204
Differential backups, 245, 250251
Disaster recovery plan (DRP), defined, 242, 257
Disk mirroring. See Mirroring
Disk performance
monitoring, 6970
RAID, 69
Disk space
archival, 266, 269
compound growth, 8
geometric growth, 8
growth rate, projecting, 78
linear growth, 8
sizing, 78
storage capacity, 67
Disk striping. See Striping
Disk subsystem, 75
Documentation
access standards, 203
backups, 256
database, physical, and diagramming, 172
infrastructure, 258
mitigation plan, 258
naming conventions, 199
schema, 173
T-SQL coding standards, 201
Domain integrity, 171, 181
Domain Name System (DNS), 237
Downtime, 236
DROP DATABASE <database>, 247
DROP statement, 137, 139
Dynamic affinity, 11
Dynamic management views, 12

E
Execution context, defined, 150, 155
Encrypting File System (EFS), 256
Encryption
cell-level, 256
certificates, 112
column-level, 159161
database-level, 140141, 256
deploying, 160161
hierarchy, 111
keys, 114115
keys, choosing, 160
performance issues, 112113
policy, 113, 140141
symmetric and asymmetric keys, 111112
transparent data encryption (TDE), 140141
triggers, 139141
Windows Server level, 111115
Encryption key, defined, 107, 111
Endpoint(s)
database mirroring, 224
default protocol, 141
policies, dening, 143144
service broker and database mirroring endpoints, 143
SOAP/Web Service endpoints, 142143
TDS endpoints, 142
Entity integrity, 171, 180181
Envisioning, consolidation, 5973
applications, 6668
costs, 6061
CPU, 68
deciding against, 6364
deciding for, 6063
defined, 58, 59
disk performance, 6970
environment, examination of, 6673
geographical issues, 7172
goals, 6466
guidelines, 65
issues, 71, 73
management, centralized, 6162
memory, 6869
resources, use of, 6263
return on investment, 62
security, 61
service-level agreements, 71
SQL Server-specific metrics, 70–71
systems, associated, 7273
team formation, 5960
EXECUTE AS CALLER, 155
EXECUTE AS OWNER, 156
EXECUTE AS SELF, 156
EXECUTE AS <user_name>, 155
Execution context, specifying, 155159
auditing, 158159
EXECUTE AS, implementing for an object, 155156
EXECUTE AS, implementing in batches, 157158
EXECUTE AS policy, batches, developing, 159
EXECUTE AS policy, objects, developing, 156
NO REVERT, 157
NO REVERT COOKIE, 158
scope of the EXECUTE AS statements, 157

Extended stored procedures (XPs), 73


Extensible Key Management (EKM), 114115
Extent(s)
defined, 30, 31
storage, physical, 32, 33

F
Failover
automatic versus manual, 214
cluster, 215
database mirroring, 222223
defined, 210, 212, 213
delay, 221
planned, 224225
three-node, 217
unplanned, 225
Fault tolerance, designing, 233
Filegroup(s)
backups, 251
defined, 30, 32
designing, 182184
partitioning, 184
performance, 183
recoverability, 184
setting up, 45
Filenames, setting up, 45
File size, setting up, 45
Firewall, 124–125
Five nines, 211
Format, defined, 265
Full backups, 244, 249251
Full-Text Search, 73

G
Geographic consolidation, 65
Global Allocation Map (GAM), 31, 33
GRANT statement, 136, 141

H
Hard disk, speed versus memory speed, 32
Hardware
acquiring, 8081
clustering, 215, 219220
configuration, 13–25
consolidation, 8081
database mirroring, 220
failure, 243
partitioning, 175
sizing, 78
WSC-certified, 215
Hardware Security Module (HSM), 114
High Availability (HA)
backups, protecting, 101
clustering, 214220
consolidation, planning, 7677
costs, 234
database mirroring, 213, 220225
defined, 210, 214
dynamic name system (DNS), 237
failover, 212213
goals, 212213

HTTP attacks, guarding against, 100
limitations, 213214
log shipping, 213, 226228
migration strategy, 235237
password cracking, 100101
replication, 213, 228230
for reporting purposes, 235
server attacks, 99
single points of failure, 211212
solution design, 233235
source code, managing, 100
SQL injection attacks, 100
stafng, 234235
storage, 230233
technologies, 211214
training costs, 77
Horizon, defined, 1, 6
Horizontal partitioning, 175
HTTP requests, responding to, 142
Human malevolence, 244
Hyperthreading, 2223

I
Impersonation and delegation, 9596
Index Allocation Map (IAM), 31, 33
Indexed Sequential Access Method (ISAM), 31
Index(es)
access speed, 185186
clustered, 184185
Database Tuning Advisor, 186
defined, 169
designing, 184186
naming conventions, 197
nonclustered, 184185
placement, physical, 186
Infrastructure, database server
designing, 125
capacity requirements, 613
configuration, current, analyzing, 26
software versions and hardware configurations, 13–25
Instance(s)
assigning ports to, 53
clustering, 47
configurations, specifying, 52–53
consolidation, 65, 76
default, 4546
defined, 30
designing, 4546
named, 46, 97
naming conventions, 4748
number of, 4647
service requirements, establishing, 52
system databases, location of, 4950
Integrity, data, 171

K
Kerberos
as authentication method, 9495
endpoints, using, 142
Keys, encryption
asymmetric, 107, 111112

choosing, 114, 160


encryption, 107, 111, 114115, 160
managing, 114
symmetric, 111112
surrogate, 113
Keys, tables
foreign, 176177
primary, choosing, 176
specifying, 175

L
.ldf file extension, 44
Local service account, 117, 118
Local system account, 117, 119
Logins
creating, 130131
system administrator (sa), 130
Log shipping
advantages and disadvantages, 226
antivirus software, and, 122
client applications, reconnecting, 227228
configuration, 226–227
defined, 210, 226
high availability, 226228
reporting options, 235
roles, choosing, 226227
roles, switching, 227
security, 98
transaction log, 38

M
.mdf file extension, 44
Media, defined, 265
Media and format, storage, 271–272
Media retention, defined, 242, 255
Memory
analyzing, 1213
consolidation, envisioning, 6869
consolidation, planning, 7475
direct addressable, 22
evaluating, 7475
forecasting and planning for, 13
growth rate, 13
monitoring, for consolidation, 6869
options, choosing, 2324
sizing, 78
speed versus hard disk speed, 32
System Monitor counters, 12
usage, assessing, 1213
Memory: Available Bytes, 12, 68
Memory: Pages/sec, 12
Memory: Paging File: % Usage, 68
Merge replication
archiving data, 277278
defined, 210, 229
high availability, 230
Method, defined, 194, 204
Microsoft Cluster Services (MSCS), 214
Microsoft Operations Manager (MOM), 13
Microsoft Solutions Framework (MSF), phases, 5859
Migration
address abstraction, implementing, 237

consolidation, application, 7879
domain issues, 79
downtime, minimizing, 236
of DTS packages to Integration
Services, 79
high availability, 235237
logins, 78
security issues, 79
testing, 236
training, 237
users, 78
Mirror database, defined, 210, 221
Mirroring, 36, 231. See also Database mirroring
Mitigation plan, 257261
business continuity plan, 257
contact list, 257258
database loss scenarios, 259
decision tree, 258259
disaster recovery plan, 257
documentation, 258
information, categorizing, 257
recovery decision tree, 260
recovery plan best practices, 260261
recovery steps priorities, 259260
recovery success criteria, 258
msdb, 70
Multicore CPU, 22
Multiple Active Result Sets (MARS), 51

N
Naming conventions
arguments against, 196
bad practices, 198199
benets of, 195196
for database objects, 196197
documenting and communicating, 199
establishing and disseminating, 196199
instances, 4748
pitfalls, avoiding, 198199
vendors, 199
.ndf file extension, 44
.NET Assembly security, designing, 145146
EXTERNAL_ACCESS, assemblies,
setting, 146
SAFE assemblies, setting, 145146
UNSAFE assemblies, setting, 146
Network diagrams, 9
Network policies, 9697
Network service account, 117, 119
Network trafc
analyzing, 910
bottlenecks, 10
between clients and servers, 10
forecasting and planning for, 10
identifying, 9
between servers, 10
NORECOVERY option, 247248
Normalization
database, 171
tables, 175
NO_TRUNCATE option, 247, 250
NTFS le system, 44

O
Object(s)
dened, 169
EXECUTE AS, 155156
naming conventions, 196197
permission, 154
as a security level, 89
Object-level security, 150164
common language runtime (CLR),
161164
encryption, column level, 159161
execution context, specifying, 155159
permissions, existing, analyzing, 154155
permissions, strategy, developing, 151153
On-line analytical processing (OLAP), 65
On-line transaction processing (OLTP), 65
Operating system
CPU licensing, 11
installation location, 43
version and edition, choosing, 1420
Organizational unit (OU), defined, 87
Orphaned users, 132

P
Page, defined, 30, 32
Page Free Space, 31, 33
Pages, 3233
Parallelism, on-processor, 22
Partitioning, tables, 174175
PascalCase, defined, 194, 198
Password
best practices, 110
change, enforced at next login, 110
cracking, 100101
expiration, enforced, 109
options, 108109
policy, 109
Performance
archiving data, 269
consolidation, 6263, 6970
database mirroring, 222
disk, monitoring, 6970
encryption, 112113
filegroup, 183
query, 266
tempdb, 63
Permission(s)
ALTER, 151, 153
CREATE, 151, 153
Data Control Language (DCL) commands, 152
defined, 150, 151
DENY, 151, 152, 155
DROP, 151
existing, analyzing, 154155
GRANT, 152, 154
list of, 152154
object, 154
REVOKE, 152, 155
roles, 151
specic, applying, 153
strategy, developing, 151153
VIEW DEFINITION, 154

Planning, consolidation, 7380
application migration, 7879
clustering, 77
data evaluation, 7477
decisions, initial, 7576
defined, 58, 73
disk subsystem, 75
hardware, sizing, 78
high availability, 7677
instances, multiple, choosing, 76
iterations, multiple, going through, 77
memory data evaluation, 7475
processor data evaluation, 74
scope creep, 7980
64-bit SQL Server, 76
upgrade advisor, 76
Policies
defined, 1, 5
encryption, 113, 140141
endpoint, 143144
EXECUTE AS, for batches, 159
EXECUTE AS, for objects, 156
network, authentication, 9697
password rules, 109
security, backup strategy, 255256
security, common language runtime, 163164
security, defined, 87, 88
security, encryption, 140–141
triggers, DDL, defining, 139
validation and testing, backup strategy,
256–257
Primary key, choosing, 176
Principal database, defined, 210, 221
Principal server, defined, 210, 212
Process: Private Bytes: sqlservr process, 68
Process: SQL Server process: % Processor Time, 68
Process: Working Set, 12
Processor: % Processor Time, 68
Production data, protecting during deployment, 204
Production database,
data integrity, 204
role and location of, 203204
Proxies, 144145
Pure log backup, 42

Q
Quorum
defined, 210, 221
drives, 122

R
RAID
0, 3637, 231
1, 3637, 231
5, 37, 231232
10, 37, 232
backups, 41
database size, 3637
designing, 232
disk performance, monitoring, 69
levels, choosing, 223
transaction log les, 40

RAM, hot add, 2425


RAM. See Memory
RECOVER option, 247248
Recovery, data. See Data recovery
Recovery model
bulk-logged, 39, 254255
defined, 242, 253
full, 39, 253254
simple, 39, 253
Redundant Array of Inexpensive Disks. See RAID
Referential integrity, 171, 181
Regulatory requirements
archiving data, 267268
capacity, infrastructure, 5, 89
configuration, 5
defined, 1, 5
longevity, 89
privacy/security, 9
Replication. See also entries for specic replication types
antivirus software, and, 122
archival, 274280
conflicts, 229–230
consolidation, 72
defined, 265
merge, 210, 229, 230, 277–278
network traffic, 10
publisher/subscriber metaphor, 275
reporting options, 235
security, high availability, 98
snapshot, 229, 277
strategy, design, 279280
topology design, 274280
topology management, 278279
transactional, 38, 229230, 276277
types, 275276
Resources, shared, 73
Restoring databases, 246249
options, 247
piecemeal restore, 249
point-in-time restore, 248
standard restore, 247248
steps, general, 247
RESTRICTED_USER option, 247
Role(s)
application, 136137
authentication, 9496
database, granting, 133137
defined, 129
fixed, 134–135
groups, and, 9496
log shipping, 226227
mapping database users to, 132
permissions, 131137
proxies, 144145
public, 135, 136
server, designing, 221
server, granting, 131132
SQL Server Agent, job, 144145
user-dened, 135136
Rollback plan, 204
Row splitting, tables, 175
Run book, 204

S
SAN storage, 75, 232233
Schema(s)
dbo, 134
defined, 129, 133
naming conventions, 197
naming structure, 133
securing, 133134
as a security level, 89
Scope, defined, 129
Scope creep, 7980
Secondary database, defined, 210, 221
Secure Socket Layer (SSL). See SSL
Security
administrative, 95
analyzing and designing, 87106
antivirus software, 122
application domains, 162
application roles, 95
archiving data, 272
assemblies, creating, 161
assemblies, trusted, 162
auditing access, 101102
authentication, 9397
backups, protecting, 101
benets, 93
CLR, 161164
conflicts, 91–92
costs, 9293
database level, 8991, 135137, 140141
DDL triggers, 137139
encryption, column level, 159161
encryption, policy, 110115, 140141
endpoints, securing, 141144
execution context, specifying, 155159
firewalls, 96, 124–125
high availability, 97101
HTTP attacks, 100
impersonation and delegation, 9596
levels, 8991
logins, creating, 130131
module signing, 162163
.NET assembly, 145146
object-level, 150164
password cracking, 100
password rules, 108110
permissions, existing, analyzing, 154155
permissions strategy, developing, 151153
policies, 99, 163164
recommendations, making, 102
requirements, gathering, 8893
resources, external, accessing, 162163
reviews, performing, 102
risk factors, 93
roles, database, granting, 134137
roles, server, granting, 131132
roles, SQL Server Agent, job, 144145
schema level, 8991
schemas, securing, 133134
scope, 8990
server level, 8991, 129134
service level, 8991

services, working with, 123124


source code, managing, 100
SQL Injection attacks, 100
SQL Server service accounts, 115121
standards, 205
technical, 99
Windows server-level, 107128
Security measures, defined, 1, 5
Security policy, defined, 87, 88
Security scope, 8990
Servers, linked, consolidation, 72
Servers, physical
consolidation, 65
instances and system databases, 4950
numbers needed, 4950
security of, 125
Service accounts, 115121
changing, 119121
choosing, 117118
domain user, 118
local service, 117, 118
local system, 117, 119
network service, 117, 119
service rights, 121
Service-level agreement (SLA), 65, 71
Service-level and database-level security, 129146
DDL triggers, 137139
encryption policy, 140141
endpoints, securing, 141144
logins, creating, 130131
.NET assembly, 145146
roles, database, granting, 134137
roles, server, granting, 131132
roles, SQL Server Agent, job, 144145
schemas, securing, 133134
Service Principal Name (SPN), 95
Services, SQL Server
defined, 107, 115–116
list of, 3, 43, 116117
modes, 123124
TCP port numbers, 124
working with, 123124
Single point of failure
defined, 210, 211
high availability, 211212
Snapshot replication
archiving data, 277
defined, 210, 229
Software versions and hardware configurations, 13–25
best practices, 14
CPU, type, choosing, 2223
hot add CPUs and RAM, 2425
memory options, choosing, 2324
operating system, versions and editions, choosing,
1420
SQL Server, editions, choosing, 2022
storage requirements, determining, 24
Sparse columns, 3536
SQL Server 2005
choosing, 21
CPU type and speed, 23
editions, 15, 21, 23

hard disk space requirements, 24
instances per edition and component, 47
services, list of, 43
storage requirements, 24
SQL Server 2008
backup compression, 42
choosing, 2122
clustering enhancements, 218219
CPU type and speed, 23
data compression, 3435, 43
datatypes, table, 180
editions, 1620, 2122, 23, 24
Extensible Key Management, 114115
hard disk space requirements, 24
hot add CPUs and RAM, 2425
instances, 48
mirroring enhancements, 225
RAM, 24
sparse columns, 3536
software modules, disk space, 24
storage requirements, 24
transparent data encryption (TDE),
140141, 256
SQL Server Authentication, 9394, 108
SQL Server: Buffer Manager, 12
SQL Server: Buffer Manager: Buffer Cache Hit Ratio, 68
SQL Server: Buffer Manager: Stolen Pages and Reserved
Pages, 68
SQL Server: Buffer Manager: Page Life Expectancy, 12
SQL Server: Cache Manager: Cache Hit Ratio, 70
SQL Server: Databases: Database Size: tempdb, 70
SQL Server: Databases: Transactions/sec, 70
SQL Server: General Statistics: User Connections, 70
SQL Server hierarchy, 90
SQL Server Integration Services (SSIS), 97
SQL Server: Memory Manager: Total Server Memory,
12, 68
SQL Server metrics, monitoring, 7071
SQL Server Service Executables, location of, 4344
SQL Server Services
dened, 107, 115116
list of, 3, 43, 116117
modes, 123124
TCP port numbers, 124
working with, 123124
SSL encrypted connections, 97
Staff
consolidation, 83
contact list, 257258
deployment process, 204
disaster situations, 234235
high availability, 234235
training, 237
Standard(s), database
access, 201203
defined, 194, 200
deployment process, 203205
security, 205
Transact-SQL coding, 200201
Storage, archival
accessibility, 271, 272

costs, 271
data movement strategy, 273
media type and format, 271273
reliability, 272
security, 272
shelf life, 271
technology, changing, 272
Storage, high availability, 230233
fault tolerance, 233
RAID arrays, 231232, 233
SAN storage array, 232233
Storage, media and format, 271272
Storage, physical, 3057
backup-file storage, 41–43
concepts, 31–33
database size, 33–37
data compression, 34–35
data files, 31–32
extents, 33
file placement, 44–45
file types, 31–32
instances, 4548
operating system, location of, 43
pages, 3233
planning for, 34
RAID, 3637
servers, number needed, 4950
sparse columns, 3536
tempdb database, 5053
transaction logs, 3132, 3741
SQL Server Service Executables, 4344
Storage capacity
analyzing, 69
disk space, 67
disk throughput, 7
locations and roles of database servers, 7
requirements, determining, 24
Storage consolidation, 65
Stored procedures
database access, indirect, 202
execute permission, 153
extended (XPs), 73
naming conventions, 196, 197198
sp_change_users_login, 132
sp_configure, 4
sp_estimate_data_compression_savings, 35
sp_ prefix, 198
sp_unsetapprole, 137
usp_ prefix, 198
Striping, 36, 231
Striping with parity, 37, 231232
Sunk cost factor, 64
Surface Area Configuration Manager, 73
Symmetric key
algorithms, 111
defined, 107
sys.dm_exec_cached_plans, 12
sys.dm_exec_query_stats, 12
sys.dm_os_memory_clerks, 12
sys.dm_os_memory_objects, 12
System Center Operations Manager (SCOM), 13
System Monitor counters, 12

T
Table(s)
for archiving, 270
computed columns, 182
constraints, 180182
datatypes, built-in SQL Server, 177180
datatypes, column, 177
datatypes, user-dened, 180
defined, 169, 173
denormalized, 270
design, 173182
integrity, 180181
keys, foreign, 176177
keys, primary, 176
keys, specifying, 175
location, physical, 182
naming conventions, 197
normalized, 175, 270
partitioned, 270
partitioning, 174175
row splitting, 175
summary, 270
Tail log backup, 42
TAKE OWNERSHIP statement, 141
TCP ports, 53, 124
Technical requirements, capacity, 45
tempdb
Configuration Manager, 52–53
disk performance, 70
instance configurations, 52–53
services, 52
size, 5152, 76
usage, 76
Test database, role and location of, 203204
Topology, defined, 265
Transactional replication
@loopback_detection, 229
archiving data, 276–277
defined, 210
high availability, 229230
Transaction log(s),
adding or enlarging, 40
backups, 42, 245246, 250251
database mirroring, 38
data recovery, 3839
designing, 3738
FILEGROWTH setting, 39
les, 31, 32
le size, managing, 3841
log shipping, 38
RAID, 40
recoverability, 32
recovery, 39
replication, 38
shrinking, 40

space use, monitoring, 3940


storage, 4041
truncating, 39
write-ahead, 32, 37
Transact-SQL (T-SQL) coding, 200201
Transparent data encryption (TDE), 140141
Triggers, DDL, 137139
events, 138139
policy, dening, 139
scope, 137138
Triggers, DML, 139
Triggers, naming conventions, 197

U
Upgrade Advisor, 76
User-dened functions, naming conventions, 197
User-dened integrity, 171

V
Vertical partitioning, 175
VIEW DEFINITION, 141, 154
View(s)
backward compatibility, 188
business requirements, 187188
database access, indirect, 202203
data customization, 188
data import/export, 188
data manipulation, 188
defined, 169, 187
designing, 187189
dynamic management, 12
filtering, row and column, 189
indexed, 188
naming conventions, 197
partitioned, 188189
standard, 188
types, 188189
user data, 188

W
Windows authentication, 9394, 108
Windows Server Catalog (WSC), 215
Windows server-level security, 107128
antivirus software, 122
asymmetric keys, 112
certicates, 112
encryption policy, 110115
rewalls, server, 124125
password rules, 108110
services, working with, 123124
SQL Server service accounts, 115121
symmetric keys, 111
Witness server, defined, 210, 221
Write-ahead log, 32, 37
