Exam Preparation document. Throughout the document, the URLs to the original sources can be
found. To be sure you have the latest information, read the online documents. To make studying easier,
I tried to pick the information that seems to be the most important or relevant for the exam.
Used Sources:
Selecting a SQL Server option in Azure: Azure SQL Database (PaaS) or SQL Server on Azure VMs (IaaS)
https://azure.microsoft.com/en-us/documentation/articles/data-management-azure-sql-database-and-sql-server-iaas
Learn how each option fits into the Microsoft data platform and get help matching the right
option to your business requirements. Whether you prioritize cost savings or minimal
administration ahead of everything else, this article can help you decide which approach
delivers against the business requirements you care about most.
When designing an application, four basic options are available for hosting the SQL Server part of the
application:
SQL Server on non-virtualized physical machines
SQL Server in on-premises virtualized machines (private cloud)
SQL Server in Azure Virtual Machine (public cloud)
Azure SQL Database (public cloud)
The following table summarizes the main characteristics of SQL Database and SQL Server on Azure
VMs:

Resources:
Azure SQL Database: You do not want to employ IT resources for support and maintenance of the
underlying infrastructure. You want to focus on the application layer.
SQL Server on Azure VMs: You have IT resources for support and maintenance.

Business continuity:
Azure SQL Database: In addition to built-in fault tolerance infrastructure capabilities, Azure SQL
Database provides features, such as Point in Time Restore, Geo-Restore, and Geo-Replication, to
increase business continuity. For more information, see [SQL Database business continuity
overview](sql-database-business-continuity.md).
SQL Server on Azure VMs: SQL Server on Azure VMs lets you set up a high availability and disaster
recovery solution for your database's specific needs. Therefore, you can have a system that is highly
optimized for your application. You can test and run failovers by yourself when needed. For more
information, see [High Availability and Disaster Recovery for SQL Server on Azure Virtual
Machines](../virtual-machines/virtual-machines-sql-server-high-availability-and-disaster-recovery-solutions.md).

Hybrid cloud:
Azure SQL Database: Your on-premises application can access data in Azure SQL Database.
SQL Server on Azure VMs: With SQL Server on Azure VMs, you can have applications that run partly
in the cloud and partly on-premises. For example, you can extend your on-premises network and
Active Directory domain to the cloud via [Azure Virtual Network](../virtual-network/virtual-networks-overview.md).
In addition, you can store on-premises data files in Azure Storage using [SQL Server Data Files in
Azure](http://msdn.microsoft.com/library/dn385720.aspx). For more information, see [Introduction
to SQL Server 2014 Hybrid Cloud](http://msdn.microsoft.com/library/dn606154.aspx). It also
supports disaster recovery for on-premises SQL Server applications using [SQL Server Backup and
Restore with Azure Blob Storage](http://msdn.microsoft.com/library/jj919148.aspx) or [AlwaysOn
replicas in Azure VMs](../virtual-machines/virtual-machines-sql-server-high-availability-and-disaster-recovery-solutions.md).
The Active Geo-Replication feature implements a mechanism to provide database redundancy within
the same Microsoft Azure region or in different regions (geo-redundancy). Active Geo-Replication
asynchronously replicates committed transactions from a database to up to four copies of the
database on different servers. The original database becomes the primary database of the
continuous copy. Each continuous copy is referred to as an online secondary database. The primary
database asynchronously replicates committed transactions to each of the online secondary
databases. While at any given point, the online secondary data might be slightly behind the primary
database, the online secondary data is guaranteed to always be transactionally consistent with
changes committed to the primary database. Active Geo-Replication supports up to four online
secondaries, or up to three online secondaries and one offline secondary.
One of the primary benefits of Active Geo-Replication is that it provides a database-level disaster
recovery solution. Using Active Geo-Replication, you can configure a user database in the Premium
service tier to replicate transactions to databases on different Microsoft Azure SQL Database servers
within the same or different regions. Cross-region redundancy enables applications to recover from a
permanent loss of a datacenter caused by natural disasters, catastrophic human errors, or malicious
acts.
Another key benefit is that the online secondary databases are readable. Therefore, an online
secondary can act as a load balancer for read workloads such as reporting. While you can create an
online secondary in a different region for disaster recovery, you could also have an online secondary
in the same region on a different server. Both online secondary databases can be used to balance
read only workloads serving clients distributed across several regions.
Database migration: You can use Active Geo-Replication to migrate a database from one server to
another online with minimum downtime.
Application upgrades: You can use the online secondary as a fail back option.
To achieve real business continuity, adding redundancy between datacenters to relational storage is
only part of the solution. Recovering an application (service) end-to-end after a disastrous failure
requires recovery of all components that constitute the service and any dependent services.
Examples of these components include the client software (for example, a browser with a custom
JavaScript), web front ends, storage, and DNS. It is critical that all components are resilient to the
same failures and become available within the recovery time objective (RTO) of your application.
Therefore, you need to identify all dependent services and understand the guarantees and
capabilities they provide. Then, you must take adequate steps to ensure that your service functions
during the failover of the services on which it depends. For more information about designing
solutions for disaster recovery, see Designing Cloud Solutions for Disaster Recovery Using Active Geo-
Replication.
Automatic Asynchronous Replication: After an online secondary database has been seeded, updates
to the primary database are asynchronously copied to the online secondary database automatically.
This means that transactions are committed on the primary database before they are copied to the
online secondary database. However, after seeding, the online secondary database is transactionally
consistent at any given point in time.
NOTE:
Asynchronous replication accommodates the latency that typifies wide-area networks by which
remote datacenters are connected.
Multiple online secondary databases: Two or more online secondary databases increase redundancy
and protection for the primary database and application. If multiple online secondary databases
exist, the application will remain protected even if one of the online secondary databases fails. If
there is only one online secondary database, and it fails, the application is exposed to higher risk until
a new online secondary database is created.
Readable online secondary databases: An application can access an online secondary database for
read-only operations using the same security principals used for accessing the primary database.
Continuous copy operations on the online secondary database take precedence over application
access. Also, if the queries on the online secondary database cause prolonged table locking,
transactions could eventually fail on the primary database.
User-controlled termination for failover: Before you can fail over an application to an online
secondary database, the continuous copy relationship with the primary database must be
terminated. Termination requires an explicit action by the application, an administrative script, or a
manual operation in the portal. After termination, the online secondary database becomes a
stand-alone database. It becomes a read-write database unless the primary database was a
read-only database. Two forms of termination of a continuous copy relationship are described later
in this topic.
NOTE:
Active Geo-Replication is only supported for databases in the Premium service tier. This applies to
both the primary and the online secondary databases. The online secondary must be configured with
the same or a larger performance level than the primary. Changes to the performance level of the
primary database are not automatically replicated to the secondaries. Any upgrades should be done
on the secondary databases first and on the primary last. For more information about changing
performance levels, see Changing Performance Levels. There are two main reasons the online
secondary should be at least the same size as the primary. First, the secondary must have enough
capacity to process the replicated transactions at the same speed as the primary; if it does not, it
could lag behind and eventually impact the availability of the primary. Second, if the secondary does
not have the same capacity as the primary, a failover may degrade the application's performance
and availability.
Local data redundancy and operational recovery are standard features for Azure SQL Database. Each
database possesses one primary and two local replica databases that reside in the same datacenter,
providing high availability within that datacenter. This means that the Active Geo-Replication
databases also have redundant replicas. Both the primary and online secondary databases have two
secondary replicas. However, the primary replica for the secondary database is directly updated by
the continuous copy mechanism and cannot accept any application-initiated updates. The following
figure illustrates how Active Geo-Replication extends database redundancy across two Azure regions.
The region that hosts the primary database is known as the primary region. The region that hosts the
online secondary database is known as the secondary region. In this figure, North Europe is the
primary region. West Europe is the secondary region.
If the primary database becomes unavailable, terminating the continuous copy relationship for a
given online secondary database makes the online secondary database a stand-alone database. The
online secondary database inherits the read-only/read-write mode of the primary database which is
unchanged by the termination. For example, if the primary database is a read-only database, after
termination, the online secondary database becomes a read-only database. At this point, the
application can fail over and continue using the online secondary database. To provide resiliency in
the event of a catastrophic failure of the datacenter or a prolonged outage in the primary region, at
least one online secondary database needs to reside in a different region.
You can only create a continuous copy of an existing database. Creating a continuous copy of an
existing database is useful for adding geo-redundancy. A continuous copy can also be created to copy
an existing database to a different Azure SQL Database server. Once created, the secondary database
is populated with the data copied from the primary database. This process is known as seeding. After
seeding is complete, each new transaction is replicated after it commits on the primary.
For information about how to create a continuous copy of an existing database, see How to enable
Geo-Replication.
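In T-SQL, a continuous copy is created from the master database of the primary server. The sketch below assumes hypothetical server and database names and a Premium-tier database; ALLOW_CONNECTIONS = ALL makes the secondary a readable online secondary:

```sql
-- Run in the master database of the PRIMARY server.
-- [MyDb] and [mysecondaryserver] are placeholder names.
ALTER DATABASE [MyDb]
    ADD SECONDARY ON SERVER [mysecondaryserver]
    WITH (ALLOW_CONNECTIONS = ALL);  -- readable online secondary
```

Seeding starts immediately after this statement completes; transactions committed afterward are replicated asynchronously.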
Due to the high latency of wide area networks, continuous copy uses an asynchronous replication
mechanism. This makes some data loss unavoidable if a failure occurs. However, some applications
may require no data loss. To protect these critical updates, an application developer can call the
sp_wait_for_database_copy_sync system procedure immediately after committing the transaction.
Calling sp_wait_for_database_copy_sync blocks the calling thread until the last committed
transaction has been replicated to the online secondary database. The procedure will wait until all
queued transactions have been acknowledged by the online secondary database.
sp_wait_for_database_copy_sync is scoped to a specific continuous copy link. Any user with the
connection rights to the primary database can call this procedure.
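A hedged sketch of the pattern described above, with placeholder server and database names; the call blocks until the last committed transaction has been acknowledged by the named secondary:

```sql
-- Commit the critical transaction first, then wait for it to reach the secondary.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 42;  -- illustrative update
COMMIT TRANSACTION;

EXEC sys.sp_wait_for_database_copy_sync
    @target_server   = N'mysecondaryserver',
    @target_database = N'MyDb';
```

As the note below warns, the wait can be long on a congested link, so reserve this call for transactions whose loss is genuinely unacceptable.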
NOTE:
The delay caused by a sp_wait_for_database_copy_sync procedure call might be significant. The
delay depends on the length of the queue and on the available bandwidth. Avoid calling this
procedure unless absolutely necessary.
The continuous copy relationship can be terminated at any time. Terminating a continuous copy
relationship does not remove the secondary database. There are two methods of terminating a
continuous copy relationship:
Planned Termination is useful for planned operations where data loss is unacceptable. A planned
termination can only be performed on the primary database, after the online secondary database
has been seeded. In a planned termination, all transactions committed on the primary database are
replicated to the online secondary database first, and then the continuous copy relationship is
terminated. This prevents loss of data on the secondary database.
Unplanned (Forced) Termination is intended for responding to the loss of either the primary
database or one of its online secondary databases. A forced termination can be performed on either
the primary database or the secondary database. Every forced termination results in the irreversible
loss of the replication relationship between the primary database and the associated online
secondary database. Additionally, forced termination causes the loss of any transactions that have
not been replicated from the primary database. A forced termination terminates the continuous
copy relationship immediately. In-flight transactions are not replicated to the online secondary
database. Therefore, a forced termination can result in an irreversible loss of any transactions that
have not been replicated from the primary database.
NOTE:
If the primary database has only one continuous copy relationship, after termination, updates to the
primary database will no longer be protected.
For more information about how to terminate a continuous copy relationship, see Recover an Azure
SQL Database from an outage.
Database Files:

Primary: The primary data file contains the startup information for the database and points to the
other files in the database. User data and objects can be stored in this file or in secondary data files.
Every database has one primary data file. The recommended file name extension for primary data
files is .mdf.

Secondary: Secondary data files are optional, are user-defined, and store user data. Secondary files
can be used to spread data across multiple disks by putting each file on a different disk drive.
Additionally, if a database exceeds the maximum size for a single Windows file, you can use
secondary data files so the database can continue to grow. The recommended file name extension
for secondary data files is .ndf.

Transaction log: The transaction log files hold the log information that is used to recover the
database. There must be at least one log file for each database. The recommended file name
extension for transaction logs is .ldf.
File Groups:

Primary: The filegroup that contains the primary file. All system tables are allocated to the primary
filegroup.

User-defined: Any filegroup that is specifically created by the user when the user first creates or later
modifies the database.
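The file and filegroup concepts above can be sketched in T-SQL; database name, file names, and paths are illustrative, not prescribed:

```sql
-- One primary (.mdf) file, one user-defined filegroup with a secondary (.ndf)
-- file on a different drive, and one transaction log (.ldf) file.
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_data,  FILENAME = 'C:\Data\Sales.mdf'),
FILEGROUP SalesFG
    (NAME = Sales_data2, FILENAME = 'D:\Data\Sales2.ndf')
LOG ON
    (NAME = Sales_log,   FILENAME = 'E:\Log\Sales.ldf');
```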
SQL Server Data Files in Microsoft Azure enables native support for SQL Server database files stored
as Microsoft Azure blobs. It allows you to create a database in SQL Server, running either on-premises
or in a virtual machine in Microsoft Azure, with a dedicated storage location for your data in Microsoft
Azure Blob Storage. This enhancement especially simplifies moving databases between machines by
using detach and attach operations. In addition, it provides an alternative storage location for your
database backup files by allowing you to restore from or to Microsoft Azure Storage. Therefore, it
enables several hybrid solutions by providing benefits for data virtualization, data movement,
security and availability, and easy, low-cost maintenance for high availability and elastic scaling.
This topic introduces concepts and considerations that are central to storing SQL Server data files in
Microsoft Azure Storage Service.
For a practical hands-on experience with this new feature, see Tutorial: Using the Microsoft Azure
Blob storage service with SQL Server 2016 databases.
The following diagram demonstrates that this enhancement enables you to store SQL Server
database files as Microsoft Azure blobs in Microsoft Azure Storage regardless of where your server
resides.
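The basic setup follows this T-SQL pattern (storage account, container, and database names are assumptions; the credential name must match the container URL and the SAS token is elided):

```sql
-- Credential holding the Shared Access Signature for the blob container.
CREATE CREDENTIAL [https://myaccount.blob.core.windows.net/datafiles]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS token>';

-- Database whose data and log files live directly in Azure Blob Storage.
CREATE DATABASE HybridDb
ON (NAME = HybridDb_data,
    FILENAME = 'https://myaccount.blob.core.windows.net/datafiles/HybridDb.mdf')
LOG ON (NAME = HybridDb_log,
    FILENAME = 'https://myaccount.blob.core.windows.net/datafiles/HybridDb.ldf');
```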
Easy and fast migration benefits: This feature simplifies the migration process by moving
one database at a time between on-premises machines, or between on-premises and cloud
environments, without any application changes. It therefore supports incremental migration while
keeping your existing on-premises infrastructure in place. In addition, access to centralized data
storage simplifies the application logic when an application needs to run in multiple locations in an
on-premises environment. In some cases, you may need to rapidly set up compute centers in
geographically dispersed locations that gather data from many different sources. By using this new
enhancement, instead of moving data from one location to another, you can store many databases
as Microsoft Azure blobs, and then run Transact-SQL scripts to create databases on the local
machines or virtual machines.
Cost and limitless storage benefits: This feature enables you to have limitless off-site storage
in Microsoft Azure while leveraging on-premises compute resources. When you use
Microsoft Azure as a storage location, you can easily focus on the application logic without
the overhead of hardware management. If you lose a computation node on-premises, you
can set up a new one without any data movement.
High availability and disaster recovery benefits: Using the SQL Server Data Files in Microsoft
Azure feature can simplify high availability and disaster recovery solutions. For
example, if a virtual machine in Microsoft Azure or an instance of SQL Server crashes, you
can re-create your databases in a new machine by just re-establishing links to Microsoft
Azure Blobs.
Security benefits: This new enhancement allows you to separate a compute instance from a
storage instance. You can have a fully encrypted database, with decryption occurring only on the
compute instance and not on the storage instance. In other words, using this new enhancement,
you can encrypt all data in the public cloud using Transparent Data Encryption (TDE) certificates,
which are physically separated from the data. The TDE keys can be stored in the master
database, which is stored locally on your physically secure on-premises machine and backed
up locally. You can use these local keys to encrypt the data that resides in Microsoft Azure
Storage. If your cloud storage account credentials are stolen, your data still stays secure because
the TDE certificates always reside on-premises.
Snapshot backup: This feature enables you to use Azure snapshots to provide nearly
instantaneous backups and quicker restores for database files stored using the Azure Blob
storage service. This capability enables you to simplify your backup and restore policies. For
more information, see File-Snapshot Backups for Database Files in Azure.
More info in the article SQL Server Data Files in Microsoft Azure:
https://msdn.microsoft.com/en-US/library/dn385720.aspx
Encryption
o Always Encrypted
o TDE for SQL DB, TDE Perf (Intel NIS HW acceleration)
o Enhancements to Crypto
o CLE for SQL DB (Cell Level Encryption)
Auditing
o Enhancements to SQL Audit
Reporting and Analysis (also with Power BI)
Audit outcome of transactions
Secure App Development
o Row-Level Security
o Dynamic Data Masking
Always Encrypted
https://msdn.microsoft.com/en-us/library/mt163865.aspx
https://channel9.msdn.com/Shows/Data-Exposed/SQL-Server-2016-Always-Encrypted
Allows customers to securely store sensitive data outside of their trust boundary.
Data remains protected from high-privileged, unauthorized users.
A customer has a client application and SQL Server both running on-premises, at their business
location. The customer wants to hire an external vendor to administer SQL Server. In order to protect
sensitive data stored in SQL Server, the customer uses Always Encrypted to ensure the separation of
duties between database administrators and application administrators. The customer stores
plaintext values of Always Encrypted keys in a trusted key store which the client application can
access. SQL Server administrators have no access to the keys and, therefore, are unable to decrypt
sensitive data stored in SQL Server.
A customer has an on-premises client application at their business location. The application operates
on sensitive data stored in a database hosted in Azure (SQL Database or SQL Server running in a
virtual machine on Microsoft Azure). The customer uses Always Encrypted and stores Always
Encrypted keys in a trusted key store hosted on-premises, to ensure Microsoft cloud administrators
have no access to sensitive data.
A customer has a client application, hosted in Microsoft Azure (e.g. in a worker role or a web role),
which operates on sensitive data also stored in Microsoft Azure. The customer uses Always
Encrypted to reduce the security attack surface area (the data is always encrypted in the database and
on the machine hosting the database).
Always Encrypted supports two types of encryption: randomized encryption and deterministic
encryption.
Deterministic encryption uses a method which always generates the same encrypted value
for any given plain text value. Using deterministic encryption allows grouping, filtering by
equality, and joining tables based on encrypted values, but can also allow unauthorized users
to guess information about encrypted values by examining patterns in the encrypted column.
This weakness is increased when there is a small set of possible encrypted values, such as
True/False, or North/South/East/West region. Deterministic encryption must use a column
collation with a binary2 sort order for character columns.
Randomized encryption uses a method that encrypts data in a less predictable manner.
Randomized encryption is more secure, but prevents equality searches, grouping, indexing,
and joining on encrypted columns.
Use deterministic encryption for columns that will be used as search or grouping parameters, for
example a government ID number. Use randomized encryption for data such as confidential
investigation comments, which are not grouped with other records and are not used to join tables.
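The two encryption types map directly to column definitions. In this hedged sketch, the table and column names are assumptions, and a column encryption key named CEK1 (protected by a column master key in the client's trusted key store) is assumed to already exist:

```sql
CREATE TABLE dbo.Patients (
    PatientId int IDENTITY PRIMARY KEY,
    -- Deterministic: searchable/joinable, requires a BIN2 collation for characters.
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    -- Randomized: more secure, but no equality search, grouping, or joins.
    Notes nvarchar(4000)
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);
```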
Store data intended for many customers in a single database/table while at the same time restricting
row-level read & write access based on users' execution context.
RLS Concepts:
RLS supports two types of security predicates.
Filter predicates silently filter the rows available to read operations (SELECT, UPDATE, and
DELETE).
Block predicates explicitly block write operations (AFTER INSERT, AFTER UPDATE, BEFORE
UPDATE, BEFORE DELETE) that violate the predicate.
A hospital can create a security policy that allows nurses to view data rows for their own
patients only.
A bank can create a policy to restrict access to rows of financial data based on the
employee's business division, or based on the employee's role within the company.
A multi-tenant application can create a policy to enforce a logical separation of each tenant's
data rows from every other tenant's rows. Efficiencies are achieved by the storage of data for
many tenants in a single table. Of course, each tenant can see only its data rows.
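The multi-tenant scenario above can be sketched with a filter predicate plus a block predicate. Schema, function, table, and session-key names here are assumptions, not a prescribed design:

```sql
-- Inline table-valued function: a row is visible only when its TenantId
-- matches the tenant id stored in the session context.
CREATE FUNCTION Security.fn_tenantPredicate(@TenantId int)
    RETURNS TABLE
    WITH SCHEMABINDING
AS
    RETURN SELECT 1 AS allowed
           WHERE @TenantId = CAST(SESSION_CONTEXT(N'TenantId') AS int);
GO

-- Filter predicate hides other tenants' rows from reads;
-- block predicate prevents inserting rows for another tenant.
CREATE SECURITY POLICY Security.TenantPolicy
    ADD FILTER PREDICATE Security.fn_tenantPredicate(TenantId) ON dbo.Orders,
    ADD BLOCK PREDICATE Security.fn_tenantPredicate(TenantId) ON dbo.Orders AFTER INSERT
    WITH (STATE = ON);
```

The application sets the tenant id once per connection, e.g. `EXEC sp_set_session_context N'TenantId', 42;`, and all queries against dbo.Orders are then filtered transparently.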
For SQL Server, TCP port 1433 must be opened in the Windows Firewall.
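On Windows Server 2012 and later, this can be done with a single PowerShell command (the rule display name is an assumption; this sketch covers only the default instance's port):

```powershell
# Allow inbound TCP 1433 for the SQL Server default instance.
New-NetFirewallRule -DisplayName "SQL Server" -Direction Inbound `
    -Protocol TCP -LocalPort 1433 -Action Allow
```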
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-
performance-best-practices
Azure Virtual Machines provide three types of disks: operating system (OS) disk, temporary disk, and
data disks. For a description of each disk type, see section Azure Infrastructure services fundamentals
in this article.
When placing your data and log files you should consider disk cache settings in addition to size limits.
For a description of cache settings, see section Azure Infrastructure services fundamentals in this
article.
While "Read Write" cache (the default setting) for the operating system disk improves overall
operating system performance and boot times and reduces read latency for the I/O patterns the OS
usually generates, we recommend that you do not use the OS disk for hosting system and user
database files. Instead, we recommend that you use data disks. When the workload demands a high
rate of random I/O (such as a SQL Server OLTP workload) and throughput is important to you, the
general guideline is to keep the data disk cache set to its default value of "None" (disabled). Because
Azure Storage is capable of more IOPS than a directly attached storage disk, this setting causes the
physical host's local disks to be bypassed, therefore providing the highest I/O rate.
Temporary disk
Unlike Azure disks (operating system and data disks) which are essentially VHDs stored as page blobs
in Azure Storage, the temporary disk (labeled as D:) is not persistent and is not implemented using
Azure Storage. It is reserved by the operating system for the page file and its performance is not
guaranteed to be predictable. Any data stored on it may be lost after your virtual machine is restarted
or resized. Hence, we do not recommend the D: drive for storing any user or system database files,
including tempdb.
This section discusses the best practices and recommendations on data disk performance options
based on testing done by Microsoft. You should be familiar with how SQL Server I/O operations work
in order to interpret the test results reported in this section. For more information, see Pages and
Extents Architecture.
It is important to note that the results we provide in this section were achieved without SQL Server
High Availability and Disaster Recovery Solutions enabled (such as, AlwaysOn Availability Groups,
database mirroring or log shipping). We recommend that you deploy one of these features to maintain
multiple redundant copies of your databases across at least two virtual machines in an availability set
in order to be covered by the Azure Cloud Services, Virtual Machines, and Virtual Network Service Level
Agreement. Enabling any of these features affects performance, so you should consider incorporating
one of them in your own performance testing to get more accurate results.
As a general rule, we recommend that you attach the maximum number of disks allowed by the VM
size (such as 16 data disks for an A7 VM) for throughput-sensitive applications. While latency may not
necessarily improve by adding more data disks when your workload is within the maximum IOPS limit,
the additional IOPS and bandwidth from the additional attached disks can help you avoid reaching
the single-disk 500 IOPS limit; exceeding that limit triggers throttling events that increase disk
response times and disk latency.
In our performance tests, we executed several SQL Server I/O measurements to understand data
disk response characteristics with respect to the typical I/O patterns generated by SQL Server for
different kinds of workloads. The results for a single disk configuration on an A7 VM instance are
summarized here:
Note: Because Azure Infrastructure Services is a multi-tenant environment, performance results may
vary. You should consider these results as an indication of what you can achieve, but not a guarantee.
We suggest you repeat these tests and measurements based on your specific workload.
If your workload exceeds or is close to the I/O performance numbers mentioned in the previous
section, we recommend that you add multiple disks (depending on your virtual machine size) and stripe
multiple disks in volumes. This configuration gives you the ability to create volumes with specific
throughput and bandwidth, based on your data and log performance needs by combining multiple
data disks together.
After you create a virtual machine in Azure, you can attach a data disk to it using either the Azure
Management Portal or the Add-AzureDataDisk Azure PowerShell cmdlet. Both techniques allow you
to select an existing data disk from a storage account, or create a new blank data disk.
If you choose to create a new blank data disk in the Management Portal, you can only choose the
storage account that your virtual machine was created in, not a different storage account.
To place your existing data disk (.vhd file) into a specific storage account, you need to use the Azure
PowerShell cmdlets. The following example demonstrates how to update a virtual machine using the
Get-AzureVM and the Add-AzureDataDisk cmdlets. The Get-AzureVM cmdlet retrieves information on
a specific virtual machine. The Add-AzureDataDisk cmdlet creates a new data disk with specified size
and label in a previously created Storage Account.
For more information about Azure PowerShell cmdlets, see Azure PowerShell on MSDN and Azure
command line tools.
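The pattern described above can be sketched with the classic (service management) Azure PowerShell cmdlets; cloud service, VM, storage account, and container names are assumptions:

```powershell
# Attach a new 100 GB blank data disk to an existing VM, placing the .vhd
# in a specific storage account, then apply the change to the VM.
Get-AzureVM -ServiceName "mycloudservice" -Name "myvm" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 100 -DiskLabel "datadisk1" -LUN 0 `
        -MediaLocation "https://mystorageaccount.blob.core.windows.net/vhds/datadisk1.vhd" |
    Update-AzureVM
```

The -MediaLocation parameter is what lets you target a storage account other than the VM's default, which the Management Portal does not allow.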
For Azure VMs running on Windows Server 2008 R2 and previous releases, the only striping technology
available is striped volumes for dynamic disks. You can use this option to stripe multiple data disks into
volumes that provide more throughput and bandwidth than what a single disk can provide.
Starting with Windows Server 2012, Storage Pools were introduced and the operating system's
software RAID capabilities were deprecated. Storage Pools enable you to virtualize storage by
grouping industry-standard disks into "pools", and then creating virtual disks, called Storage Spaces,
from the available capacity in the storage pools. You can then configure these virtual disks to provide
striping across all disks in the pool, combining good performance characteristics. In addition, Storage
Pools enable you to add and remove disk space based on your needs.
During our tests, after adding a number of data disks (4, 8 and 16) as shown in the previous section,
we created a new storage pool by using the following Windows PowerShell command:
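The exact command is not reproduced in this document; a representative sketch (the pool name is an assumption) pools all attached data disks that are eligible for pooling:

```powershell
# Group all poolable physical disks into a single storage pool.
New-StoragePool -FriendlyName "DataPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $True)
```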
Next, we created a virtual disk on top of the new storage pool and specified resiliency setting and
virtual disk size.
Important Note: For performance, it is very important that the -NumberOfColumns parameter is set
to the number of disks utilized to create the underlying Storage Pool. Otherwise, IO requests cannot
be evenly distributed across all data disks in the pool and you will get suboptimal performance.
The -Interleave parameter enables you to specify the number of bytes written to each underlying data
disk in a virtual disk. We recommend that you use 256 KB for all workloads.
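Putting the two parameters together, a hedged sketch for a 4-disk pool (pool and disk names are assumptions; 262144 bytes = 256 KB) might look like this:

```powershell
# Striped (Simple) virtual disk across all 4 disks in the pool,
# with the recommended 256 KB interleave.
New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "VirtualDisk1" `
    -ResiliencySettingName Simple -NumberOfColumns 4 -Interleave 262144 `
    -UseMaximumSize
```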
Lastly, we created and formatted the volume to make it usable to the operating system and
applications by using the following Windows PowerShell commands:
Get-VirtualDisk -FriendlyName VirtualDisk1 | Get-Disk | Initialize-Disk -Passthru |
New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume -AllocationUnitSize 64K
Once the volume is created, it is possible to dynamically increase the disk capacity by attaching new
data disks. To achieve optimal capacity utilization, consider the number of columns your storage
spaces have and add disks in multiples of that number. See Windows Server Storage Spaces
Frequently Asked Questions for more information.
Using Storage Pools instead of traditional Windows operating system striping in dynamic disks brings
several advantages in terms of performance and manageability. We recommend that you use Storage
Pools for disk striping in Azure Virtual Machines.
During our internal testing, we implemented the following scenarios with different numbers of
disks and disk volume configurations. We tested the following scenarios with configurations of
4, 8, and 16 data disks respectively, and we observed increased IOPS for each data disk added, as
expected:
We arranged multiple data disks as simple volumes and leveraged the Database Files and
Filegroups feature of SQL Server to stripe database files across multiple volumes.
We used Windows Server Storage Pools to create larger volumes, which contain multiple data
disks, and we placed database and log files inside these volumes.
It's important to note that using multiple data disks provides performance benefits but creates
more management overhead. In addition, partial unavailability of one of the striped disks can result in
unavailability of a database. Therefore, for such configurations, we recommend that you consider
enhancing the availability of your databases using high availability and disaster recovery capabilities of
SQL Server as described in High Availability and Disaster Recovery for SQL Server in Azure Virtual
Machines.
The following tables summarize the results of tests that we performed at Microsoft using multiple
data disk configurations.
By using the newly introduced Intel-based A8 and A9 VM sizes, we repeated our IO performance tests
and noticed a significant increase in bandwidth and throughput for larger sequential IO requests. If
you use Intel-based A8 and A9 VM sizes, you can get a performance increase for 64 KB (and bigger)
read and write operations. If your workload is IO intensive, these new VM sizes (A8 and A9) can help
you achieve more linear scalability compared to smaller VM sizes, but always within the 500 IOPS per
disk boundary. For more information, see About the A8 and A9 Compute Intensive Instances.
Based on our tests, we have made the following observations about the Azure Virtual Machine
environment:
Spreading your I/O workload across a number of data disks benefits smaller random
operations (more common in OLTP scenarios) where IOPS and bandwidth scale in a nearly
linear fashion.
As the I/O block size increases, for read operations adding more data disks does not result in
higher IOPS or bandwidth. This means that if your workload is read intensive with more
analytical queries, adding more disks will not necessarily help.
For write intensive workload, adding more data disks can increase performance in a nearly
linear fashion. This means that you can benefit from placing each transaction log for multiple
databases on a separate data disk.
For large sequential I/O block sizes (such as 64 KB or greater), writes generally perform better
than reads.
For SQL Server workloads, the D- and DS-series VMs can also be very interesting, especially the DS
series, where Premium Storage (SSD) is available for the data disks.
Depending on how you configure your storage, you should place the data and log files for user and
system databases accordingly to achieve your performance goals. This section provides guidance on
how to place database files when using SQL Server in Azure Virtual Machines:
Option 1: You can create a single striped volume using Windows Server Storage Spaces
leveraging multiple data disks, and place all database and log files in this volume. In this
scenario, all your database workload shares aggregated I/O throughput and bandwidth
provided by these multiple disks, and you simplify the placement of database files.
Individual database workloads are load balanced across all available disks, and you do not
need to worry about single database spikes or workload distribution. You can find the
graphical representation of this configuration below:
Option 2: You can create multiple striped volumes, each composed of the number of data
disks required to achieve specific I/O performance, and carefully place user and system
database files on these volumes accordingly. For example, you may have one important
production database with a write-intensive, high-priority workload, and you may want to
maximize the database and log file throughput by segregating them on two separate
4-disk volumes (each volume providing around 2,000 IOPS and 100 MB/sec).
This option can give you precise file placement by optimizing available IO performance.
You can find the graphical representation of this configuration below:
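As an illustration of Option 2, the following sketch creates two separate 4-disk striped volumes, one intended for data files and one for log files. All names are placeholders, and the disk selection (first four poolable disks per pool) is deliberately simplistic:

```powershell
# Two 4-disk pools: one for data files, one for log files.
$disks = Get-PhysicalDisk -CanPool $True
$subsystem = (Get-StorageSubSystem).FriendlyName
New-StoragePool -FriendlyName DataPool -StorageSubSystemFriendlyName $subsystem `
    -PhysicalDisks ($disks | Select-Object -First 4)
New-StoragePool -FriendlyName LogPool -StorageSubSystemFriendlyName $subsystem `
    -PhysicalDisks ($disks | Select-Object -Skip 4 -First 4)

# One striped volume per pool, 256 KB interleave, 64 KB NTFS clusters.
foreach ($pool in 'DataPool', 'LogPool') {
    New-VirtualDisk -FriendlyName "$pool-VD" -StoragePoolFriendlyName $pool `
        -ResiliencySettingName Simple -NumberOfColumns 4 `
        -Interleave 256KB -UseMaximumSize |
      Get-Disk | Initialize-Disk -PassThru |
      New-Partition -AssignDriveLetter -UseMaximumSize |
      Format-Volume -AllocationUnitSize 64KB
}
```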
You can still create single-disk volumes and leverage SQL Server file and filegroup placement for your
databases. While this can still offer some benefits in terms of flexible storage layout organization, it
introduces additional complexity and also limits single-file (data or log) IO performance to what a
single Azure data disk can provide: about 500 IOPS and 60 MB/sec.
Although Azure data disks behave differently than traditional rotating spindles (on which
competing random and sequential operations on the same disks can impact performance), we still
recommend that you keep data and log files in different paths to achieve dedicated IOPS and
bandwidth for each.
To understand your IO requirements and performance while running your SQL Server workloads
on Azure Virtual Machines, use the following three tools and combine their results
carefully:
- SQL Server IO statistics: They reflect the database management system view of the IO
subsystem.
- Windows Server Logical Disk Performance Counters: They show how the operating system
performs on IOs.
- Azure Storage Analytics: Azure hosts data disks’ VHD files in Azure Storage. You can turn on
logging and metrics for the storage account that hosts your data disks, and get useful
information such as the number of successful and failed requests, timeout, throttling, network,
authorization, and other errors. You can configure and get data from these metrics on the
Azure Portal, or via PowerShell, REST APIs, and .NET Storage Client library.
Combined, these sources help you determine:
- Whether IO-related stalls or wait types in SQL Server (manifesting as increased disk response
times in OS performance counters) are related to throttling events happening in Azure Storage.
- Whether rebalancing your data and log files across different volumes (and underlying disks) can
help keep throughput and bandwidth within storage performance limits.
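For the operating-system view, a quick way to sample the relevant LogicalDisk counters is Get-Counter (the counter paths below are standard Windows counters; the latency threshold mentioned is a rule of thumb, not a hard limit):

```powershell
# Sample disk latency and throughput every 5 seconds for one minute.
# Sustained 'Avg. Disk sec/Read' or '.../Write' well above ~0.020 s
# suggests the workload is exceeding what the disks can deliver.
Get-Counter -SampleInterval 5 -MaxSamples 12 -Counter @(
    '\LogicalDisk(*)\Avg. Disk sec/Read',
    '\LogicalDisk(*)\Avg. Disk sec/Write',
    '\LogicalDisk(*)\Disk Transfers/sec',
    '\LogicalDisk(*)\Disk Bytes/sec'
)
```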
TempDB
As mentioned in section Azure virtual machine disks and cache settings, we recommend that you place
tempDB on data disks instead of the temporary disk (D:). Following are the three primary reasons for
this recommendation based on our internal testing with SQL Server test workloads.
Performance variance: In our testing, we noticed that you can get the same level of
performance you get on D:, if not more IOPS, from a single data disk. However, the
performance of D: drive is not guaranteed to be as predictable as the operating system or
data disk. This is because the size of the D: drive and the performance you get from it
depends on the size of the virtual machine you use, and the underlying physical disks
shared between all VMs hosted by the same server.
Configuration upon VM downtime: If the virtual machine gets shut down (for planned
or unplanned reasons), then in order for SQL Server to recreate tempDB on the D:
drive, the service account under which the SQL Server service starts needs to
have local administrator privileges. In addition, the common practice in on-premises
SQL deployments is to keep database and log files (including tempDB) in a separate folder,
in which case the folder needs to be created before SQL Server starts. For most customers,
this extra re-configuration overhead is not worth the return.
Performance bottleneck: If you place tempDB on the D: drive and your application workloads
use tempDB heavily, this can cause a performance bottleneck because the D: drive can
introduce constraints in terms of IOPS throughput. Instead, place tempDB on data disks to
gain more flexibility. For more information on configuration best practices for optimizing
tempdb, see Compilation of SQL Server TempDB IO Best Practices.
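Moving tempDB to a data-disk volume is a one-time change. A minimal sketch, assuming an E:\SQLTemp target path (an illustrative choice), the default tempdb logical file names, and the SqlServer module's Invoke-Sqlcmd:

```powershell
# Point tempdb's default files at a folder on a data-disk volume.
# The new location takes effect after the SQL Server service restarts,
# and the folder must exist before that restart.
New-Item -ItemType Directory -Path 'E:\SQLTemp' -Force | Out-Null
Invoke-Sqlcmd -ServerInstance '.' -Query @"
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, FILENAME = 'E:\SQLTemp\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE
    (NAME = templog, FILENAME = 'E:\SQLTemp\templog.ldf');
"@
```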
We strongly recommend that you perform your own workload testing before implementing a desired
SQL Server file layout strategy.
With Azure disks, we have observed a “warm-up effect” that can result in a reduced rate of throughput
and bandwidth for a short period of time. In situations where a data disk is not accessed for a period
of time (approximately 20 minutes), adaptive partitioning and load balancing mechanisms kick in. If
the disk is accessed while these algorithms are active, you may notice some degradation in throughput
and bandwidth for a short period of time (approximately 10 minutes), after which they return to their
normal levels. This warm-up effect happens because of the adaptive partitioning and load balancing
mechanism of Azure, which dynamically adjusts to workload changes in a multi-tenant storage
environment. You may observe similar effects in other widely known cloud storage systems as well.
For more information, see Azure Storage: A Highly Available Cloud Storage Service with Strong
Consistency.
This warm-up effect is unlikely to be noticed for systems that are in continuous use. But we
recommend you consider it during performance testing or when accessing systems that have been
inactive for a while.
Single vs. multiple storage accounts for data disks attached to a single VM
To simplify management and reduce potential consistency risks in case of failures, we recommend
that you keep all the data disks attached to a single virtual machine in the same storage account.
Storage accounts are implemented as a recovery unit in case of failures. So, keeping all the disks in the
same account makes the recovery operations simple. There is no performance improvement if you
store data disks attached to a single VM in multiple storage accounts. If you have multiple VMs, we
recommend that you consider the storage account limits for throughput and bandwidth during
capacity planning. In addition, distribute VMs and their data disks to multiple storage accounts if the
aggregated throughput or bandwidth is higher than what a single storage account can provide. For
information on storage account limits, see Azure Storage Scalability and Performance Targets. For
information on max IOPS per disk, see Virtual Machine and Cloud Service Sizes for Azure.
NTFS volumes use a default cluster size of 4 KB. Based on our performance tests, we recommend
changing the default cluster size to 64 KB during volume creation for both single disk and multiple disks
(storage spaces) volumes.
Some I/O intensive workloads can gain performance benefits through data compression. Compressed
tables and indexes mean more data stored in fewer pages, and hence require reading fewer pages
from disk, which in turn can improve the performance of workloads that are I/O intensive.
For a data warehouse workload running on SQL Server in Azure VM, we found significant improvement
in query performance by using page compression on tables and indexes, as shown in Figure 1.
Figure 1: Query performance with data compression (elapsed time in ms and reads, NONE vs. PAGE).
Figure 1 compares performance of one query with no compression (NONE) and page compression
(PAGE). As illustrated, the logical and physical reads are significantly reduced with page compression,
and so is the elapsed time. As expected, CPU time of the query does go up with page compression,
because SQL Server needs to decompress the data while returning results to the query. Your results
will vary, depending upon your workload.
For an OLTP workload, we observed significant improvements in throughput (as measured by business
transactions per second) by using page compression on selected tables and indexes that were involved
in the I/O intensive workload. Figure 2 compares the throughput and CPU usage for the OLTP workload
with and without page compression.
Figure 2: OLTP throughput (transactions/sec) and CPU usage, NONE vs. PAGE.
Note that you may see different results when you test your workloads in the Azure Virtual Machine
environment. But we recommend that you test data compression techniques for I/O intensive
workloads and then decide which tables and indexes to compress. For more information, see Data
Compression: Strategy, Capacity Planning and Best Practices.
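A hedged sketch of how you might evaluate and then apply page compression to one candidate table (the database, schema, and table names are placeholders):

```powershell
# Estimate the space savings first, then rebuild the table's indexes
# with page compression if the estimate looks worthwhile.
Invoke-Sqlcmd -ServerInstance '.' -Database 'MyDatabase' -Query @"
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo', @object_name = 'FactSales',
     @index_id = NULL, @partition_number = NULL,
     @data_compression = 'PAGE';
"@
Invoke-Sqlcmd -ServerInstance '.' -Database 'MyDatabase' -Query @"
ALTER INDEX ALL ON dbo.FactSales
REBUILD WITH (DATA_COMPRESSION = PAGE);
"@
```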
For databases of any significant size, enabling instant file initialization can improve the performance
of some operations involving database files, such as creating a database or restoring a database, adding
files to a database or extending the size of an existing file, autogrow, and so on. For information, see
How and Why to Enable Instant File Initialization.
To take advantage of instant file initialization, you grant the SQL Server (MSSQLSERVER) service
account the SE_MANAGE_VOLUME_NAME permission by adding it to the Perform Volume
Maintenance Tasks security policy. If you are using a SQL Server platform image for Azure, the
default service account (NT Service\MSSQLSERVER) is not in the Perform Volume Maintenance
Tasks security policy. In other words, instant file initialization is not enabled in a SQL Server
Azure platform image.
After adding the SQL Server service account to the Perform Volume Maintenance Tasks security
policy, restart the SQL Server service.
The following figure illustrates observed test results for creating and restoring a 100 GB database with
and without instant file initialization.
Figure: Time in minutes to create and to restore a 100 GB database, with and without instant file initialization.
Many of the best practices for running SQL Server on-premises are still relevant in Azure Virtual
Machines.
Moving your on-premises database to Azure SQL Database varies in complexity based on your
database and application design, and your tolerance for downtime. For compatible databases,
migration to Azure SQL Database is a straightforward schema and data movement operation
requiring few, if any, changes to the schema and little or no re-engineering of applications. Azure SQL
Database V12 brings near-complete engine compatibility with SQL Server 2014 and SQL Server 2016.
Most SQL Server 2016 Transact-SQL statements are fully supported in Microsoft Azure SQL Database.
This includes the SQL Server data types, operators, and the string, arithmetic, logical, cursor
functions, and the other Transact-SQL elements that most applications depend upon. Partially
supported or unsupported features are usually related to differences in how SQL Database manages the database
(such as file, high availability, and security features) or for special purpose features such as service
broker. Because SQL Database isolates many features from dependency on the master database,
many server-level activities are inappropriate and unsupported. Features deprecated in SQL Server
are generally not supported in SQL Database. Databases and applications that rely on partially
supported or unsupported features will need some re-engineering before they can be migrated.
At a high level, the workflow for migrating a SQL Server database to Azure SQL Database is: test the database for compatibility, fix any issues found, and then migrate using the method that best fits your downtime tolerance.
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-
performance-best-practices
https://azure.microsoft.com/en-us/documentation/services/sql-database
1.3.1 Design a solution architecture
Azure SQL Database geo-replication includes a set of new features that improve programming and
management capabilities for business continuity and disaster recovery scenarios. These
enhancements are available for V12 databases. For more details, please refer to Spotlight on new
capabilities of SQL Database geo-replication.
To test for SQL Database compatibility issues before you start the migration process, use one of the
following methods:
Use SqlPackage: SqlPackage is a command-prompt utility that will test for compatibility issues and, if
any are found, generate a report of the detected issues.
Use SQL Server Management Studio: The Export Data-tier Application wizard in SQL Server
Management Studio will display detected errors on the screen.
If compatibility issues are detected, you must fix these compatibility issues before proceeding with
the migration.
To migrate a compatible SQL Server database, Microsoft provides several migration methods for
various scenarios. The method you choose depends upon your tolerance for downtime, the size and
complexity of your SQL Server database, and your connectivity to the Microsoft Azure cloud.
To choose your migration method, the first question to ask is whether you can afford to take the
database out of production during the migration. Migrating a database while active transactions are
occurring can result in database inconsistencies and possible database corruption. There are many
methods to quiesce a database, from disabling client connectivity to creating a database snapshot.
To migrate with minimal downtime, use SQL Server transaction replication if your database meets
the requirements for transactional replication. If you can afford some downtime or you are
performing a test migration of a production database for later migration, consider one of the
following three methods:
SSMS Migration Wizard: For small to medium databases, migrating a compatible SQL Server 2005 or
later database is as simple as running the Deploy Database to Microsoft Azure Database Wizard in
SQL Server Management Studio.
Export to BACPAC File and then Import from BACPAC File: If you have connectivity challenges (no
connectivity, low bandwidth, or timeout issues) and for medium to large databases, use a BACPAC
file. With this method, you export the SQL Server schema and data to a BACPAC file and then import
the BACPAC file into SQL Database using the Import Data-tier Application wizard in SQL Server
Management Studio or the SqlPackage command-prompt utility.
Use BACPAC and BCP together: Use a BACPAC file and BCP together for much larger databases to
achieve greater parallelization and increased performance, albeit with greater complexity. With this
method, migrate the schema and the data separately:
Export the schema only to a BACPAC file.
Import the schema only from the BACPAC File into SQL Database.
Use BCP to extract the data into flat files and then parallel load these files into Azure SQL Database.
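An illustrative outline of the BACPAC-plus-BCP approach. Server, database, table, path, and credential names are all placeholders, and the exact SqlPackage switches for exporting schema without data vary by version, so treat this as a sketch rather than a recipe:

```powershell
# 1. Export the source database to a BACPAC file with SqlPackage.
SqlPackage.exe /Action:Export /SourceServerName:OnPremServer `
    /SourceDatabaseName:MyDb /TargetFile:C:\migrate\MyDb.bacpac

# 2. Import the BACPAC into Azure SQL Database.
SqlPackage.exe /Action:Import /SourceFile:C:\migrate\MyDb.bacpac `
    /TargetServerName:myserver.database.windows.net `
    /TargetDatabaseName:MyDb /TargetUser:dbadmin /TargetPassword:$password

# 3. Extract large tables to native-format flat files, then load them in
#    parallel (run one bcp out/in pair per table or partition).
bcp.exe MyDb.dbo.FactSales out C:\migrate\FactSales.dat -n -S OnPremServer -T
bcp.exe MyDb.dbo.FactSales in C:\migrate\FactSales.dat -n `
    -S myserver.database.windows.net -U dbadmin -P $password
```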
Understand DTUs and know the values in the tables for single databases and elastic database pools:
Azure SQL Database provides multiple service tiers to handle different types of workloads. You can
create a single database with defined characteristics and pricing. Or you can manage multiple
databases by creating an elastic database pool. In both cases, the tiers include Basic, Standard, and
Premium. But the database options in these tiers vary based on whether you are creating an
individual database or a database within an elastic database pool. This article provides an overview
of service tiers in both contexts.
Basic Best suited for a small size database, supporting typically one single active
operation at a given time. Examples include databases used for development or
testing, or small scale infrequently used applications.
Standard The go-to option for most cloud applications, supporting multiple concurrent
queries. Examples include workgroup or web applications.
Premium Designed for high transactional volume, supporting a large number of concurrent
users and requiring the highest level of business continuity capabilities. Examples
are databases supporting mission critical applications.
NOTE:
Web and Business editions are being retired. Find out how to upgrade Web and Business editions.
Please read the Sunset FAQ if you plan to continue using Web and Business Editions.
Performance characteristics listed here apply to databases created using SQL Database V12. In
situations where the underlying hardware in Azure hosts multiple SQL databases, your database will
still get a guaranteed set of resources, and the expected performance characteristics of your
individual database is not affected.
For a better understanding of DTUs, see the DTU section in this topic.
NOTE:
For a detailed explanation of all other rows in this service tiers table, see Service tier capabilities and
limits.
In addition to creating and scaling a single database, you also have the option of managing multiple
databases within an elastic database pool. All of the databases in an elastic database pool share a
common set of resources. The performance characteristics are measured by elastic Database
Transaction Units (eDTUs). As with single databases, elastic database pools come in three service
tiers: Basic, Standard, and Premium. For elastic databases these three service tiers still define the
overall performance limits and several features.
Elastic database pools allow these databases to share and consume DTU resources without needing
to assign a specific performance level to the databases in the pool. For example, a single database in
a Standard pool can go from using 0 eDTUs to the maximum database eDTU (either 100 eDTUs
defined by the service tier or a custom number that you configure). This allows multiple databases
with varying workloads to efficiently use eDTU resources available to the entire pool.
The following table describes the characteristics of the elastic database pool service tiers.
Each database within a pool also adheres to the single-database characteristics for that tier. For
example, the Basic pool has a limit for max sessions per pool of 2400 - 28800, but an individual
database within that pool has a database limit of 300 sessions (the limit for a single Basic database as
specified in the previous section).
Understanding DTUs
The Database Transaction Unit (DTU) is the unit of measure in SQL Database that represents the
relative power of databases based on a real-world measure: the database transaction. We took a set
of operations that are typical for an online transaction processing (OLTP) request, and then
measured how many transactions could be completed per second under fully loaded conditions
(that’s the short version, you can read the gory details in the Benchmark overview).
A Basic database has 5 DTUs, which means it can complete 5 transactions per second, while a
Premium P11 database has 1750 DTUs.
The DTU for single databases translates directly to the eDTU for elastic databases. For example, a
database in a Basic elastic database pool offers up to 5 eDTUs. That’s the same performance as a
single Basic database. The difference is that the elastic database won’t consume any eDTUs from the
pool until it has to.
A simple example helps. Take a Basic elastic database pool with 1000 DTUs and drop 800 databases
in it. As long as only 200 of the 800 databases are being used at any point in time (5 DTU X 200 =
1000), you won’t hit capacity of the pool, and database performance won’t degrade. This example is
simplified for clarity. The real math is a bit more involved. The portal does the math for you, and
makes a recommendation based on historical database usage. See Price and performance
considerations for an elastic database pool to learn how the recommendations work, or to do the
math yourself.
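The pool arithmetic from that example can be sketched as a quick calculation (numbers taken from the paragraph above; the portal's real recommendation logic is more involved):

```powershell
$poolEDTUs     = 1000   # total eDTUs allocated to the Basic pool
$perDbMaxEDTUs = 5      # max eDTUs one Basic database can consume
$databaseCount = 800    # databases placed in the pool

# Number of databases that can run at their per-database max eDTU
# simultaneously before the pool itself becomes the bottleneck.
$concurrentAtMax = $poolEDTUs / $perDbMaxEDTUs
"Up to $concurrentAtMax of the $databaseCount databases can peak at once."
```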
Monitoring the performance of a SQL database starts with monitoring the resource utilization
relative to level of database performance you choose. This relevant data is exposed in the following
ways:
In the Azure Portal, you can monitor a single database’s utilization by selecting your database and
clicking the Monitoring chart. This brings up a Metric window that you can change by clicking the
Edit chart button. Add the following metrics:
CPU Percentage
DTU Percentage
Data IO Percentage
Storage Percentage
Once you’ve added these metrics, you can continue to view them in the Monitoring chart with more
details on the Metric window. All four metrics show the average utilization percentage relative to the
DTU of your database.
You can also configure alerts on the performance metrics. Click the Add alert button in the Metric
window and follow the wizard to configure your alert. You can choose to be alerted when a metric
exceeds, or falls below, a certain threshold.
For example, if you expect the workload on your database to grow, you can choose to configure an
email alert whenever your database reaches 80% on any of the performance metrics. You can use
this as an early warning to figure out when you might have to switch to the next higher performance
level.
The performance metrics can also help you determine if you are able to downgrade to a lower
performance level. Assume you are using a Standard S2 database and all performance metrics show
that the database on average does not use more than 10% at any given time. It is likely that the
database will work well in Standard S1. However, be aware of workloads that spike or fluctuate
before making the decision to move to a lower performance level.
The same metrics that are exposed in the portal are also available through system views:
sys.resource_stats in the logical master database of your server, and sys.dm_db_resource_stats in
the user database (sys.dm_db_resource_stats is created in each Basic, Standard, and Premium user
database. Web and Business edition databases return an empty result set). Use sys.resource_stats if
you need to monitor less granular data across a longer period of time. Use
sys.dm_db_resource_stats if you need to monitor more granular data within a smaller time frame.
For more information, see Azure SQL Database Performance Guidance.
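A minimal sketch of querying sys.dm_db_resource_stats (the server, database, and credentials are placeholders; the view returns one row per 15-second interval, covering roughly the last hour):

```powershell
Invoke-Sqlcmd -ServerInstance 'myserver.database.windows.net' `
    -Database 'MyDb' -Username 'dbadmin' -Password '<password>' -Query @"
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
"@
```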
For elastic database pools, you can monitor individual databases in the pool with the techniques
described in this section. But you can also monitor the pool as a whole. For information, see Monitor
and manage an elastic database pool.
https://msdn.microsoft.com/en-us/library/azure/dn741340.aspx
Use Microsoft Azure SQL Database service tiers (editions) to dial in cloud database performance and
capabilities to suit your application.
The Basic, Standard, and Premium service tiers offer predictable performance, flexible business
continuity options, and streamlined billing. In addition, with multiple performance levels, you have
the flexibility to choose the level that best meets your workload demands.
Should your workload increase or decrease, you can easily change the performance characteristics of
a database in the Microsoft Azure Management Portal. Select your database, click Scale, and then
choose a new service tier. For more information, see Changing Database Service Tiers and
Performance Levels.
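The same scale operation can also be scripted; for example, with the AzureRM PowerShell module of the same era (resource group, server, and database names are placeholders):

```powershell
# Scale a database to Standard S2. The change is applied online, although
# connections may be dropped briefly when it completes.
Set-AzureRmSqlDatabase -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myserver' -DatabaseName 'MyDb' `
    -Edition 'Standard' -RequestedServiceObjectiveName 'S2'
```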
The features available with each service tier fall into the following categories:
Performance and Scalability: Basic, Standard, and Premium service tiers have one or more
performance levels that offer predictable performance. Performance levels are expressed in
database throughput units (DTUs), which provide a quick way to compare the relative
performance of a database. For more detailed information about performance levels and
DTUs, see Azure SQL Database Service Tiers and Performance Levels. In addition to the
performance level, for all database service tiers, you also pick a maximum database size
supported by the service tier. For more information on the supported database sizes, see
CREATE DATABASE.
Business Continuity: These features help you recover your database from human and
application errors, or datacenter failures. Many built-in features, such as Geo-Restore, are
available with Basic, Standard, and Premium service tiers. For more information, see Azure
SQL Database Business Continuity.
Auditing: With Basic, Standard, and Premium service tiers, you can track logs and events that
occur in a database. For more information, see Azure SQL Database Performance Guidance.
SQL Database is available in Basic, Standard, and Premium service tiers. Each service tier offers
different levels of performance and capabilities to support lightweight to heavyweight database
workloads. You can build your first app on a small database for a few bucks a month, then change the
service tier manually or programmatically at any time as your app goes viral worldwide, without
downtime to your app or your customers.
For many businesses and apps, being able to create databases and dial single database performance
up or down on demand is enough, especially if usage patterns are relatively predictable. But if you
have unpredictable usage patterns, it can make it hard to manage costs and your business model.
Elastic database pools in SQL Database solve this problem. The concept is simple. You allocate
performance to a pool, and pay for the collective performance of the pool rather than single
database performance. You don’t need to dial database performance up or down. The databases in
the pool, called elastic databases, automatically scale up and down to meet demand. Elastic
databases consume but don’t exceed the limits of the pool, so your cost remains predictable even if
database usage doesn’t. What’s more, you can add and remove databases to the pool, scaling your
app from a handful of databases to thousands, all within a budget that you control.
Either way you go—single or elastic—you’re not locked in. You can blend single databases with
elastic database pools, and change the service tiers of single databases and pools to create innovative
designs. Moreover, with the power and reach of Azure, you can mix-and-match Azure services with
SQL Database to meet your unique modern app design needs, drive cost and resource efficiencies,
and unlock new business opportunities.
But how can you compare the relative performance of databases and database pools? How do you
know the right click-stop when you dial up and down? The answer is the database transaction unit
(DTU) for single databases and the elastic DTU (eDTU) for elastic databases and database pools.
https://channel9.msdn.com/Series/Windows-Azure-Storage-SQL-Database-Tutorials/Scott-Klein-
Video-02
In this episode, Scott is joined by Tobias Ternstrom – Principal Program Manager Lead for
performance in Azure SQL Database, as he breaks down what the new Database Throughput Unit is
and how you can use it to understand what kind of horsepower you can expect from the new
services tiers. The DTU is a critical part of providing a more predictable performance experience for
you. A DTU represents the power of the database engine as a blended measure of CPU, memory, and
read and write rates. This measure helps you assess the relative power of the six SQL Database
performance levels (Basic, S1, S2, P1, P2, and P3).
https://azure.microsoft.com/en-us/services/sql-data-warehouse
https://azure.microsoft.com/en-us/documentation/services/sql-data-warehouse
Channel 9 Ignite 2015 video: Microsoft Azure SQL Data Warehouse Overview
Channel 9 Ignite 2015 video: Azure SQL Data Warehouse: Deep Dive
SQL Data Warehouse presents numerous options for loading data including:
PolyBase
Azure Data Factory (Loading Azure SQL Data Warehouse with Azure Data Factory 3m40s)
BCP command-line utility (Channel 9 video: Loading data into Azure SQL Data Warehouse with BCP 3 minutes)
SQL Server Integration Services (SSIS)
3rd party data loading tools
While all of the above methods can be used with SQL Data Warehouse, PolyBase's ability to
transparently parallelize loads from Azure Blob Storage will make it the fastest tool for loading data.
Check out the article “Load data into SQL Data Warehouse” to learn more about how to load with
PolyBase and get some guidance on initial data loading.
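A hedged outline of a PolyBase load (the storage account, container, table columns, and credentials are placeholders; a database-scoped credential is typically also required for private blob containers):

```powershell
Invoke-Sqlcmd -ServerInstance 'mydw.database.windows.net' `
    -Database 'MyDW' -Username 'dbadmin' -Password '<password>' -Query @"
CREATE EXTERNAL DATA SOURCE AzureBlob
WITH (TYPE = HADOOP,
      LOCATION = 'wasbs://container@myaccount.blob.core.windows.net');

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

CREATE EXTERNAL TABLE dbo.SalesExternal
    (SaleId INT, SaleDate DATE, Amount MONEY)
WITH (LOCATION = '/sales/', DATA_SOURCE = AzureBlob, FILE_FORMAT = CsvFormat);

-- CTAS performs the actual parallel load into the warehouse.
CREATE TABLE dbo.Sales
WITH (DISTRIBUTION = HASH(SaleId))
AS SELECT * FROM dbo.SalesExternal;
"@
```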
Server-level. These actions include server operations, such as management changes and
logon and logoff operations.
Database-level. These actions encompass data manipulation languages (DML) and data
definition language (DDL) operations.
Audit-level. These actions include actions in the auditing process.
DATABASE_PERMISSION_CHANGE_GROUP This event is raised for any database permission change
(GDR, i.e. GRANT/DENY/REVOKE) for any database in the server. Equivalent to the Audit Database
Scope GDR Event Class.
DATABASE_PRINCIPAL_CHANGE_GROUP This event is raised when principals, such as users, are
created, altered, or dropped from a database. Equivalent to the Audit Database Principal
Management Event Class. (Also equivalent to the Audit Add DB Principal Event Class, which occurs
on the deprecated sp_grantdbaccess, sp_revokedbaccess, sp_addPrincipal, and sp_dropPrincipal
stored procedures.)
Database-Level Audit Action Groups are actions similar to SQL Server Security Audit Event classes.
For more information about event classes, see SQL Server Event Class Reference.
The following table describes the database-level audit action groups and provides their equivalent
SQL Server Event Class where applicable.
Database-level actions support the auditing of specific actions directly on database schema and
schema objects, such as Tables, Views, Stored Procedures, Functions, Extended Stored Procedures,
Queues, and Synonyms. Types, XML Schema Collections, Databases, and Schemas are not audited.
The audit of schema objects may be configured on Schema and Database, which means that events
on all schema objects contained by the specified schema or database will be audited.
Action Description
SELECT This event is raised whenever a SELECT is issued.
UPDATE This event is raised whenever an UPDATE is issued.
INSERT This event is raised whenever an INSERT is issued.
DELETE This event is raised whenever a DELETE is issued.
EXECUTE This event is raised whenever an EXECUTE is issued.
RECEIVE This event is raised whenever a RECEIVE is issued.
REFERENCES This event is raised whenever a REFERENCES permission is checked
You can also audit the actions in the auditing process. This can be in the server scope or the database
scope. In the database scope, it only occurs for database audit specifications.
1. In Object Explorer, expand the database where you want to create an audit
specification.
2. Expand the Security folder.
3. Right-click the Database Audit Specifications folder and select New Database Audit
Specification….
The following options are available on the Create Database Audit Specification dialog
box.
Name
The name of the database audit specification. This is generated automatically when
you create a new database audit specification but is editable.
Audit
The name of an existing database audit. Either type in the name of the audit or select
it from the list.
Audit Action Type
Specifies the database-level audit action groups and audit actions to capture. For the
list of database-level audit action groups and audit actions and a description of the
events they contain, see SQL Server Audit Action Groups and Actions.
Object Schema
The schema of the specified object.
Object Name
The name of the object to audit. This is only available for audit actions; it does not
apply to audit groups.
Ellipsis (…)
Opens the Select Objects dialog to browse for and select an available object, based
on the specified Audit Action Type.
Principal Name
The account to filter the audit by for the object being audited.
Ellipsis (…)
Opens the Select Objects dialog to browse for and select an available object, based
on the specified Object Name.
USE master ;
GO
-- Create the server audit.
CREATE SERVER AUDIT Payrole_Security_Audit
TO FILE ( FILEPATH =
'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA' ) ;
GO
-- Enable the server audit.
ALTER SERVER AUDIT Payrole_Security_Audit
WITH (STATE = ON) ;
To create a database-level audit specification
USE AdventureWorks2012 ;
GO
-- Create the database audit specification.
CREATE DATABASE AUDIT SPECIFICATION Audit_Pay_Tables
FOR SERVER AUDIT Payrole_Security_Audit
ADD (SELECT , INSERT
ON HumanResources.EmployeePayHistory BY dbo )
WITH (STATE = ON) ;
GO
For more information, see CREATE SERVER AUDIT (Transact-SQL) and CREATE DATABASE
AUDIT SPECIFICATION (Transact-SQL).
Microsoft Azure SQL Database provides a relational database service for Azure and other Internet-
based applications. To help protect your data, the SQL Database firewall prevents all access to your
SQL Database server until you specify which computers have permission. The database firewall
grants access based on the originating IP address of each request.
To configure your database firewall, you create firewall rules that specify ranges of acceptable IP
addresses. You can create firewall rules at the server and database levels.
Server-level firewall rules: These rules enable clients to access your entire Azure SQL
Database server, that is, all the databases within the same logical server. These rules are
stored in the master database.
Database-level firewall rules: These rules enable clients to access individual databases
within your Azure SQL Database server. These rules are created per database and are stored
in the individual databases (including master). These rules can be helpful in restricting access
to certain (secure) databases within the same logical server.
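Both kinds of rule can also be managed with T-SQL. A minimal sketch with an invented rule name and IP range; sp_set_firewall_rule must be run while connected to the master database, and sp_set_database_firewall_rule while connected to the individual database:

```sql
-- Server-level rule (run in master): opens the whole logical server
-- to the given range. Rule name and addresses are examples.
EXECUTE sp_set_firewall_rule
    @name = N'OfficeRange',
    @start_ip_address = '10.0.0.1',
    @end_ip_address   = '10.0.0.255';

-- Database-level rule (run in the target database): opens only
-- that database to a single address.
EXECUTE sp_set_database_firewall_rule
    @name = N'AppServer',
    @start_ip_address = '10.0.0.10',
    @end_ip_address   = '10.0.0.10';
```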
Where you manage server-level security: in SQL Server, the Security folder in SQL Server
Management Studio's Object Explorer; in SQL Database, the master database and the Azure portal.
Server-level security role for creating logins: in SQL Server, the securityadmin fixed server role;
in SQL Database, the loginmanager database role in the master database.
Commands for managing logins: CREATE LOGIN, ALTER LOGIN, DROP LOGIN in both (in SQL
Database there are some parameter limitations and you must be connected to the master database).
View that shows all logins: in SQL Server, sys.server_principals; in SQL Database, sys.sql_logins
(you must be connected to the master database).
Server-level role for creating databases: in SQL Server, the dbcreator fixed server role; in SQL
Database, the dbmanager database role in the master database.
Command for creating a database: CREATE DATABASE in both (in SQL Database there are some
parameter limitations and you must be connected to the master database).
View that lists all databases: sys.databases in both (in SQL Database you must be connected to
the master database).
By implementing the auditing feature in SQL Database, you can retain your audit trail over time, as
well as analyze reports showing database activity of success or failure conditions for the following
predefined events:
Plain SQL
Parameterized SQL
Stored procedure
Logins
Transaction management
Row-Level Security enables customers to control access to rows in a database table based on the
characteristics of the user executing a query (e.g., group membership or execution context).
Row-Level Security (RLS) simplifies the design and coding of security in your application. RLS enables
you to implement restrictions on data row access. For example ensuring that workers can access only
those data rows that are pertinent to their department, or restricting a customer's data access to
only the data relevant to their company.
The access restriction logic is located in the database tier rather than away from the data in another
application tier. The database system applies the access restrictions every time that data access is
attempted from any tier. This makes your security system more reliable and robust by reducing the
surface area of your security system.
Implement RLS by using the CREATE SECURITY POLICY Transact-SQL statement, and predicates
created as inline table valued functions.
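As a sketch of that pattern (schema, table, and column names here are invented for illustration, and a schema named Security is assumed to exist), a filter predicate written as an inline table-valued function is bound to a table with CREATE SECURITY POLICY:

```sql
-- Predicate function: returns a row (i.e. grants access) only when the
-- SalesRep column matches the user executing the query.
CREATE FUNCTION Security.fn_securitypredicate (@SalesRep AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_securitypredicate_result
       WHERE @SalesRep = USER_NAME();
GO

-- Bind the predicate to the table as a filter; rows that do not satisfy
-- the predicate are silently filtered from query results.
CREATE SECURITY POLICY SalesFilter
ADD FILTER PREDICATE Security.fn_securitypredicate(SalesRep)
ON dbo.Sales
WITH (STATE = ON);
```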
Encrypting data in transit. SQL Database connections are encrypted using TLS/SSL for the Tabular
Data Stream (TDS) transfer of data. In fact, v12 now supports the strongest version of Transport
Layer Security (TLS) 1.2 when connecting with the latest versions of the ADO.Net (4.6), JDBC (4.2) or
ODBC [??]. Support for ODBC on Linux, PHP, and node.js is coming soon. For Azure SQL Database
Microsoft provides a valid certificate for the TLS connection. For increased security and to eliminate
the possibility of “man-in-the-middle” attacks, do the following for each of the different drivers:
Setting Encrypt=True ensures the client uses an encrypted connection. Setting
TrustServerCertificate=False ensures that the client verifies the server certificate before accepting
the connection.
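For ADO.NET, those two settings appear directly in the connection string; the server, database, and credential values below are placeholders:

```
Server=tcp:yourserver.database.windows.net,1433;Database=yourdb;User ID=youruser@yourserver;Password=<your password>;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;
```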
Privileged SQL users: These SQL users always have access to unmasked data.
Masking function: This set of methods controls access to data for different scenarios.
Masking rules: This set of rules defines the fields to mask and the masking function.
Important: Dynamic Data Masking does not protect against brute force attacks of the data from a
malicious administrator.
To implement DDM, you need to open the Dynamic Data Masking settings for your database in the
Azure Management Portal, as shown in Figure 5. Here you can add masking rules to apply to your
data. For example, you can select an existing masking field format for credit card numbers, Social
Security Numbers, or email, among others, or you can create a custom format.
You can make use of Masking Recommendations to easily discover potentially sensitive fields in your
database that you would like to mask. Adding masking rules from this list of recommendations is as
easy as clicking on ‘add’ for each relevant mask and saving the DDM settings.
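Masking rules can also be defined in T-SQL. A sketch against an invented table, using the built-in email() and partial() masking functions:

```sql
-- Mask an email column with the built-in email() function.
ALTER TABLE dbo.Customers
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Expose the first and last character of a phone number, mask the middle.
ALTER TABLE dbo.Customers
ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'partial(1,"XXXXX",1)');

-- Users without the UNMASK permission see masked values;
-- GRANT UNMASK TO SomeUser; restores unmasked reads for that user.
```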
2.2.8 Configure Always Encrypted
Always Encrypted introduces a set of client libraries that allow operations on encrypted data
transparently inside an application. With the introduction of Always Encrypted, Microsoft
simplifies the process of encrypting your data: the data is transparently encrypted at the client and
stays encrypted throughout the rest of the application stack.
Since this security is performed by an ADO.NET client library, minimal changes are needed for an
existing application to use Always Encrypted. This allows encryption to be easily configured at the
application layer and data to be encrypted at all layers of the application. Always Encrypted has the
following characteristics:
The key is always under control of the client and application, and is never on the server.
Neither server nor database administrators can recover data in plain text.
Encrypted columns of data are never sent to the server as plain text.
Limited query operations on encrypted data are possible.
With Always Encrypted, data stays encrypted whether at rest or in motion. The encryption key
remains inside the application in a trusted environment, thereby reducing the surface area for attack
and simplifying implementation. To learn more and get a first-hand introduction to how to
protect sensitive data with Always Encrypted, refer to the Always Encrypted blog.
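The encryption is declared per column. A sketch of a column definition, assuming a column encryption key named CEK1 has already been created from a column master key held by the client (table and column names are invented):

```sql
-- SSN is encrypted deterministically so equality lookups still work;
-- a BIN2 collation is required for deterministic encryption of strings.
CREATE TABLE dbo.Patients
(
    PatientId int IDENTITY PRIMARY KEY,
    SSN char(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    Name nvarchar(100) NULL
);
```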
3 Design for high availability, disaster recovery, and scalability (25–
30%)
3.1 Design and implement high availability solutions
3.1.1 Design a high availability solution topology
High Availability Solutions (SQL Server): https://msdn.microsoft.com/en-us/library/ms190202.aspx
As part of the SQL Server AlwaysOn offering, AlwaysOn Failover Cluster Instances leverages Windows
Server Failover Clustering (WSFC) functionality to provide local high availability through redundancy
at the server-instance level—a failover cluster instance (FCI). An FCI is a single instance of SQL Server
that is installed across Windows Server Failover Clustering (WSFC) nodes and, possibly, across
multiple subnets. On the network, an FCI appears to be an instance of SQL Server running on a single
computer, but the FCI provides failover from one WSFC node to another if the current node becomes
unavailable. For more information, see AlwaysOn Failover Cluster Instances (SQL Server).
Database mirroring
This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in
new development work, and plan to modify applications that currently use this feature. We
recommend that you use AlwaysOn Availability Groups instead.
Log shipping
Like AlwaysOn Availability Groups and database mirroring, log shipping operates at the database
level. You can use log shipping to maintain one or more warm standby databases (referred to as
secondary databases) for a single production database that is referred to as the primary database.
For more information about log shipping, see About Log Shipping (SQL Server).
3.1.2 Implement high availability solutions between on-premises and Azure
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-high-
availability-and-disaster-recovery-solutions
AlwaysOn Availability Groups: Some availability replicas running in Azure VMs and other replicas
running on-premises for cross-site disaster recovery. The production site can be either on-premises
or in an Azure datacenter.
Because all availability replicas must be in the same WSFC cluster, the WSFC cluster
must span both networks (a multi-subnet WSFC cluster). This configuration requires a
VPN connection between Azure and the on-premises network.
For successful disaster recovery of your databases, you should also install a replica
domain controller at the disaster recovery site.
It is possible to use the Add Replica Wizard in SSMS to add an Azure replica to an
existing AlwaysOn Availability Group. For more information, see Tutorial: Extend your
AlwaysOn Availability Group to Azure.
Database Mirroring: One partner running in an Azure VM and the other running on-premises for
cross-site disaster recovery using server certificates. Partners do not need to be in the same Active
Directory domain, and no VPN connection is required.
Another database mirroring scenario involves one partner running in an Azure VM and
the other running on-premises in the same Active Directory domain for cross-site
disaster recovery. A VPN connection between the Azure virtual network and the on-
premises network is required.
For successful disaster recovery of your databases, you should also install a replica
domain controller at the disaster recovery site.
Log Shipping: One server running in an Azure VM and the other running on-premises for cross-site
disaster recovery. Log shipping depends on Windows file sharing, so a VPN connection
between the Azure virtual network and the on-premises network is required.
For successful disaster recovery of your databases, you should also install a replica
domain controller at the disaster recovery site.
Backup and Restore with Azure Blob Storage Service: On-premises production databases backed up
directly to Azure blob storage for disaster recovery.
For more information, see Backup and Restore for SQL Server in Azure Virtual Machines.
3.1.3 Design cloud-based backup solutions
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-backup-and-
restore/
The sections below include information specific to the different versions of SQL Server supported in
an Azure virtual machine.
Backup Considerations When Database Files are Stored in the Microsoft Azure Blob service
The reasons for performing database backups, and the underlying backup technology itself, change
when your database files are stored in Microsoft Azure Blob storage. For more information
on storing database files in Azure blob storage, see SQL Server Data Files in Azure.
You no longer need to perform database backups to provide protection against hardware or media
failure because Microsoft Azure provides this protection as part of the Microsoft Azure service.
You still need to perform database backups to provide protection against user errors, or for archival
purposes, regulatory reasons, or administrative purposes.
You can perform nearly instantaneous backups and rapid restores using the SQL Server File-Snapshot
Backup feature in Microsoft SQL Server 2016 Community Technology Preview 3 (CTP3). For more
information, see File-Snapshot Backups for Database Files in Azure.
Backup and Restore in Microsoft SQL Server 2016 Community Technology Preview 3 (CTP3)
Microsoft SQL Server 2016 Community Technology Preview 3 (CTP3) supports the backup and restore
with Azure blobs features found in SQL Server 2014 and described below. But it also includes the
following enhancements:
Striping: When backing up to Microsoft Azure blob storage, SQL Server 2016 supports backing up to
multiple blobs to enable backing up large databases, up to a maximum of 12.8 TB.
Snapshot Backup: Through the use of Azure snapshots, SQL Server File-Snapshot Backup provides
nearly instantaneous backups and rapid restores for database files stored using the Azure Blob
storage service. This capability enables you to simplify your backup and restore policies. File-
snapshot backup also supports point in time restore. For more information, see Snapshot Backups for
Database Files in Azure.
Managed Backup Scheduling: SQL Server Managed Backup to Azure now supports custom schedules.
For more information, see SQL Server Managed Backup to Microsoft Azure.
NOTE:
For a tutorial of the capabilities of SQL Server 2016 when using Azure Blob storage, see Tutorial:
Using the Microsoft Azure Blob storage service with SQL Server 2016 databases.
For detailed information on SQL Server Backup and Restore in SQL Server 2012, see Backup and
Restore of SQL Server Databases (SQL Server 2012).
Starting in SQL Server 2012 SP1 Cumulative Update 2, you can back up to and restore from the Azure
Blob Storage service. This enhancement can be used to backup SQL Server databases on a SQL Server
running on an Azure Virtual Machine or an on-premises instance. For more information, see SQL
Server Backup and Restore with Azure Blob Storage Service.
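Backup to URL looks like the following sketch; the storage account, container, and key are placeholders, and a credential holding the storage account name and access key must be created first:

```sql
-- Credential holding the storage account name and access key (placeholders).
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage account access key>';

-- Back up directly to a blob in the container.
BACKUP DATABASE AdventureWorks2012
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;
```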
Some of the benefits of using the Azure Blob storage service include the ability to bypass the 16-disk
limit for attached disks, ease of management, and the direct availability of the backup file to another
instance of SQL Server running on an Azure virtual machine or to an on-premises instance for
migration or disaster recovery purposes. For a full list of benefits of using the Azure blob storage
service for SQL Server backups, see the Benefits section in SQL Server Backup and Restore with Azure
Blob Storage Service.
For Best Practice recommendations and troubleshooting information, see Backup and Restore Best
Practices (Azure Blob Storage Service).
Backup and Restore in other versions of SQL Server supported in an Azure Virtual Machine
For SQL Server Backup and Restore in SQL Server 2008 R2, see Backing up and Restoring Databases in
SQL Server (SQL Server 2008 R2).
For SQL Server Backup and Restore in SQL Server 2008, see Backing up and Restoring Databases in
SQL Server (SQL Server 2008).
Next Steps
If you are still planning your deployment of SQL Server in an Azure VM, you can find provisioning
guidance in the following tutorial: Provisioning a SQL Server Virtual Machine on Azure.
Although backup and restore can be used to migrate your data, there are potentially easier data
migration paths to SQL Server on an Azure VM. For a full discussion of migration options and
recommendations, see Migrating a Database to SQL Server on an Azure VM.
Review other resources for running SQL Server in Azure Virtual Machines.
To discuss the business continuity solutions, there are several concepts you need to be familiar with.
Disaster recovery (DR): a process of restoring the normal business function of the application
Estimated Recovery Time (ERT): The estimated duration for the database to be fully available after a
restore or failover request.
Recovery time objective (RTO): the maximum acceptable time before the application fully recovers
after a disruptive event. RTO measures the maximum loss of availability during a failure.
Recovery point objective (RPO): the maximum amount of recent updates (expressed as a time
interval) the application can lose by the moment it fully recovers after a disruptive event. RPO
measures the maximum loss of data during a failure.
The following table shows the differences of the business continuity features across the service tiers:
Point In Time Restore: Basic, any restore point within 7 days; Standard, any restore point within 14
days; Premium, any restore point within 35 days.
Geo-Restore: ERT < 12h, RPO < 1h for all tiers.
Standard Geo-Replication: not included in Basic; ERT < 30s, RPO < 5s for Standard and Premium.
Active Geo-Replication: Premium only; ERT < 30s, RPO < 5s.
Backup and restore operations occur within the context of a recovery model. A recovery model is a
database property that controls how the transaction log is managed. Also, the recovery model of a
database determines what types of backups and what restore scenarios are supported for the
database. Typically, a database uses either the simple recovery model or the full recovery model. The
full recovery model can be supplemented by switching to the bulk-logged recovery model before
bulk operations. For an introduction to these recovery models and how they affect transaction log
management, see The Transaction Log (SQL Server).
The best choice of recovery model for the database depends on your business requirements. To
avoid transaction log management and simplify backup and restore, use the simple recovery model.
To minimize work-loss exposure, at the cost of administrative overhead, use the full recovery model.
For information about the effect of recovery models on backup and restore, see Backup Overview
(SQL Server).
After you have selected a recovery model that meets your business requirements for a specific
database, you have to plan and implement a corresponding backup strategy. The optimal backup
strategy depends on a variety of factors, of which the following are especially significant:
If there is a predictable off-peak period, we recommend that you schedule full database
backups for that period.
o Under the simple recovery model, consider scheduling differential backups between
full database backups. A differential backup captures only the changes since the last
full database backup.
o Under the full recovery model, you should schedule frequent log backups. Scheduling
differential backups between full backups can reduce restore time by reducing the
number of log backups you have to restore after restoring the data.
Are changes likely to occur in only a small part of the database or in a large part of the
database?
For a large database in which changes are concentrated in a part of the files or filegroups,
partial backups and/or file backups can be useful. For more information, see Partial Backups
(SQL Server) and Full File Backups (SQL Server).
For more information, see Estimate the Size of a Full Database Backup, later in this section.
Before you implement a backup and restore strategy, you should estimate how much disk space a
full database backup will use. The backup operation copies the data in the database to the backup
file. The backup contains only the actual data in the database and not any unused space. Therefore,
the backup is usually smaller than the database itself. You can estimate the size of a full database
backup by using the sp_spaceused system stored procedure. For more information, see
sp_spaceused (Transact-SQL).
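For example, run sp_spaceused in the target database; the reported data figures give an upper bound on the size of a full backup:

```sql
USE AdventureWorks2012;  -- example database
GO
-- Reports database size and space used; a full backup contains only
-- used pages, so it is usually smaller than the database size shown.
EXEC sp_spaceused;
```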
Schedule Backups
Performing a backup operation has minimal effect on transactions that are running; therefore,
backup operations can be run during regular operations. You can perform a SQL Server backup with
minimal effect on production workloads.
Note
For information about concurrency restrictions during backup, see Backup Overview (SQL Server).
After you decide what types of backups you require and how frequently you have to perform each
type, we recommend that you schedule regular backups as part of a database maintenance plan for
the database. For information about maintenance plans and how to create them for database
backups and log backups, see Use the Maintenance Plan Wizard.
You do not have a restore strategy until you have tested your backups. It is very important to
thoroughly test your backup strategy for each of your databases by restoring a copy of the database
onto a test system. You must test restoring every type of backup that you intend to use.
We recommend that you maintain an operations manual for each database. This operations manual
should document the location of the backups, backup device names (if any), and the amount of time
that is required to restore the test backups.
3.2 Design and implement scalable solutions
3.2.1 Design a scale-out solution
Scaling Out SQL Server
https://msdn.microsoft.com/en-us/library/aa479364.aspx
Configure a Native Mode Report Server Scale-Out Deployment (SSRS Configuration Manager)
https://msdn.microsoft.com/en-us/library/ms159114.aspx
Microsoft SQL Server provides multi-master replication through peer-to-peer replication. It provides
a scale-out and high-availability solution by maintaining copies of data across multiple nodes. Built
on the foundation of transactional replication, peer-to-peer replication propagates transactionally
consistent changes in near real-time.
https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity/
https://azure.microsoft.com/en-us/documentation/articles/sql-database-business-continuity-design
Designing your application for business continuity requires you to answer the following questions:
1. Which business continuity feature is appropriate for protecting my application from outages?
2. What level of redundancy and replication topology do I use?
SQL Database provides built-in basic protection of every database by default, by storing the
database backups in geo-redundant Azure storage (GRS). If you choose this method, no
special configuration or additional resource allocation is necessary. With these backups, you can
recover your database in any region using the Geo-Restore command. See the Recover from an
outage section for the details of using geo-restore to recover your application.
You should use the built-in protection if your application meets the following criteria:
1. It is not considered mission critical. It doesn't have a binding SLA; therefore, downtime of 24 hours
or longer will not result in financial liability.
2. The rate of data change is low (e.g. transactions per hour), so the RPO of 1 hour will not result in
massive data loss.
3. The application is cost sensitive and cannot justify the additional cost of Geo-Replication.
NOTE: Geo-Restore does not pre-allocate the compute capacity in any particular region to restore
active databases from the backup during the outage. The service will manage the workload
associated with the geo-restore requests in a manner that minimizes the impact on the existing
databases in that region and their capacity demands will have priority. Therefore, the recovery time
of your database will depend on how many other databases will be recovering in the same region at
the same time.
Geo-Replication creates a replica database (secondary) in a different region from your primary. It
guarantees that your database will have the necessary data and compute resources to support the
application's workload after the recovery. Refer to the Recover from an outage section for details on
using failover to recover your application.
You should use Geo-Replication if your application meets the following criteria:
1. It is mission critical. It has a binding SLA with aggressive RPO and RTO; loss of data and availability
will result in financial liability.
2. The rate of data change is high (e.g. transactions per minute or second). The RPO of 1 hour
associated with the default protection would likely result in unacceptable data loss.
3. The cost associated with using Geo-Replication is significantly lower than the potential financial
liability and associated loss of business.
NOTE: If your application uses Basic tier database(s), Geo-Replication is not supported.
Standard tier databases do not have the option of using Active Geo-Replication so if your application
uses standard databases and meets the above criteria it should enable Standard Geo-Replication.
Premium databases on the other hand can choose either option. Standard Geo-Replication has been
designed as a simpler and less expensive disaster recovery solution, particularly suited to applications
that use it only to protect from unplanned events such as outages. With Standard Geo-Replication
you can only use the DR paired region for the recovery and can create only one secondary for each
primary. An additional secondary may be necessary for the application upgrade scenario. So if this
scenario is critical for your application you should enable Active Geo-Replication instead. Please refer
to Upgrade application without downtime for additional details.
NOTE: Active Geo-Replication also supports read-only access to the secondary database thus
providing additional capacity for the read-only workloads.
You can enable Geo-Replication using Azure Classic Portal or by calling REST API or PowerShell
command.
Azure Classic Portal
The DR paired region on the Geo-Replication blade will be marked as recommended. If you use a
Premium tier database you can choose a different region. If you are using a Standard database you
cannot change it. The Premium database will have a choice of the secondary type (Readable or Non-
readable). Standard database can only select a Non-readable secondary.
PowerShell
# Get the primary database. The resource group and server names below are
# examples; -ResourceGroupName and -ServerName are required by the cmdlet.
$database = Get-AzureRmSqlDatabase -DatabaseName "mydb" -ResourceGroupName "rg1" -ServerName "srv1"
# Create a non-readable secondary on the partner server.
$secondaryLink = $database | New-AzureRmSqlDatabaseSecondary -PartnerResourceGroupName "rg2" -PartnerServerName "srv2" -AllowConnections "None"
This API is asynchronous. After it returns use the Get Replication Link API to check the status of this
operation. The replicationState field of the response body will have the value CATCHUP when the
operation is completed.
When designing your application for business continuity you should consider several configuration
options. The choice will depend on the application deployment topology and what parts of your
applications are most vulnerable to an outage. Please refer to Designing Cloud Solutions for Disaster
Recovery Using Geo-Replication for guidance.
To perform monitoring tasks with SQL Trace by using Transact-SQL stored procedures
To open traces and configure how traces are displayed by using SQL Server Profiler
To create, modify, and use trace templates by using SQL Server Profiler
To use SQL Server Profiler traces to collect and monitor server performance
4.1.2 Monitor using dynamic management views (DMVs) and dynamic management
functions (DMFs)
Dynamic management views and functions return server state information that can be used to
monitor the health of a server instance, diagnose problems, and tune performance.
Server-scoped dynamic management views and functions. These require VIEW SERVER STATE
permission on the server.
Database-scoped dynamic management views and functions. These require VIEW DATABASE
STATE permission on the database.
The article Dynamic Management Views and Functions (Transact-SQL) explains DMVs and DMFs and
gives several examples.
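As one sketch of the pattern, the following query (valid in both SQL Server and SQL Database) lists currently executing requests with their statement text, combining the sys.dm_exec_requests DMV with the sys.dm_exec_sql_text DMF:

```sql
-- Currently running requests, longest-running first, with their SQL text.
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.total_elapsed_time,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
ORDER BY r.total_elapsed_time DESC;
```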
https://www.microsoft.com/en-us/download/details.aspx?id=38829
The Microsoft Windows Azure SQL Database Management Pack enables you to monitor the
availability and performance of applications that are running on Windows Azure SQL Database.
Feature Summary
After configuration, the Microsoft Windows Azure SQL Database Monitoring Management Pack
offers the following functionalities:
https://azure.microsoft.com/en-in/documentation/articles/sql-database-troubleshoot-performance
https://azure.microsoft.com/en-in/documentation/articles/sql-database-monitoring-with-dmvs/
Microsoft Azure SQL Database enables a subset of dynamic management views to diagnose
performance problems, which might be caused by blocked or long-running queries, resource
bottlenecks, poor query plans, and so on. This topic provides information on how to detect common
performance problems by using dynamic management views.
For detailed information on dynamic management views, see Dynamic Management Views and
Functions (Transact-SQL) in SQL Server Books Online.
For detailed information on dynamic management views, see Dynamic Management Views and
Functions (Transact-SQL) in SQL Server Books Online. (This is the same article as mentioned in 4.1.2)
In SQL Database, querying a dynamic management view requires VIEW DATABASE STATE
permissions. The VIEW DATABASE STATE permission returns information about all objects within the
current database. To grant the VIEW DATABASE STATE permission to a specific database user, run the
following query:
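The query referred to above is the following (the user name is a placeholder):

```sql
-- Grants a specific database user the right to query DMVs in this database.
GRANT VIEW DATABASE STATE TO database_user;
```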
In an on-premises instance of SQL Server, dynamic management views return server state
information. In SQL Database, they return information about your current logical database only.
The information above is from the article SQL Database Monitoring with Dynamic Management
Views. Read this article and also go through the examples mentioned in this article:
Add-AzureRmAccount
https://azure.microsoft.com/en-us/blog/azure-automation-your-sql-agent-in-the-cloud/
http://davidjrh.intelequia.com/2015/10/rebuilding-sql-database-indexes-using.html