
SQL Server Partitioning Cheat Sheet

Created By: Ramu Poonjolai

April 02, 2014


Table of Contents

Introduction
How to create partition table
Create Partition Function
Create Partition Scheme
Partition function to view the row count for each partition
SPLIT (or) MERGE the partition
SPLIT partition
Merge partition
DROP partition function
SWITCH the Partition
Prerequisites for Switching Partitions
SWITCH the Partition to non-partition table
SWITCH the non-Partition to Partition table
Best Practices
Tips, Tricks and additional points
Advantages of Partitioning
Find Metadata about PARTITION Functions/Schemes
Catalog views
$PARTITION System function
Partitioned table maintenance
1. INDEX fragmentation and defrag tips
2. Partition filegroup backups
3. Database consistency checks
Appendix A: Script to create RANGE RIGHT Partition table
Appendix B: Script to create RANGE LEFT Partition table
Appendix C: Sample scripts
Sample scripts used in RANGE RIGHT demo
Sample scripts used in RANGE LEFT demo
Sample script to find the expensive queries
SQL Server hash partitioning
Introduction

We generally partition very large tables and databases for better manageability and performance.

How to create partition table

Here are the steps for partitioning the table:


a) Create the filegroups on the database
b) Create the files for the specified filegroups
c) Create the partition function
d) Create the partition scheme based on the partition function
e) Create the table based on partition scheme.

Create Partition Function

The following script will create a partition function with 3 boundaries.

USE <database_name>
GO

DECLARE @boundary1 DATETIME


DECLARE @boundary2 DATETIME
DECLARE @boundary3 DATETIME
SELECT @boundary1 = '20120101', @boundary2 = '20130101', @boundary3 = '20140101'

CREATE PARTITION FUNCTION p_func_right(DATETIME) AS


RANGE RIGHT FOR VALUES (@boundary1, @boundary2, @boundary3)

This example creates 3 boundaries, which means 4 ranges, so we need 4 filegroups in the database.
Since we said RANGE RIGHT, each boundary value resides in (belongs to) the partition on its right. If we specify RANGE LEFT, the
boundary belongs to the partition on its left.

If you specify '20120101', it means '20120101 00:00:00.000'. If you need the last DATETIME value before '20130101 00:00:00.000',
use '20121231 23:59:59.997' (DATETIME rounds to .000/.003/.007 millisecond increments, so .997 is the largest representable
value before midnight).
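To see the difference concretely, you can compare where a boundary value lands under each range type with the $PARTITION function. The p_func_left_demo function below is only a throwaway illustration for this comparison, not part of the main script:

-- Illustration only: a RANGE LEFT twin of p_func_right, used just to compare boundary ownership
CREATE PARTITION FUNCTION p_func_left_demo(DATETIME) AS
RANGE LEFT FOR VALUES ('20120101', '20130101', '20140101')
GO
SELECT $PARTITION.p_func_right('20130101') AS right_partition,    -- returns 3: the boundary belongs to the partition on its right
       $PARTITION.p_func_left_demo('20130101') AS left_partition  -- returns 2: the boundary stays in the partition on its left
GO
DROP PARTITION FUNCTION p_func_left_demo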

Create Partition Scheme


The following script will create the partition scheme.

USE <database_name>
GO

CREATE PARTITION SCHEME p_scheme_right AS
PARTITION p_func_right TO
([FG2000],
[FG2012],
[FG2013],
[FG2014])

Partition scheme name is “p_scheme_right”. It is based on partition function “p_func_right”.


Partition function to view the row count for each partition

The following SQL statement will show the count of rows in each partition of the table.

SELECT $PARTITION.pFunc(create_dt) AS PARTITIONID,
COUNT(*) AS ROW_COUNT
FROM DBO.Orders
GROUP BY $PARTITION.pFunc(create_dt)
ORDER BY PARTITIONID

You could retrieve all records for a particular partition identifier using the following syntax, again using $PARTITION
function:

SELECT * FROM DBO.Orders


WHERE $PARTITION.pFunc(create_dt) = 2

SPLIT (or) MERGE the partition

As time goes by, you may have to slide the window and create new partitions for better performance. You may have to
split the current partition. Sometimes you may have to merge two partitions and remove a file from a certain LUN.

Here is the syntax to split or merge a partition:

ALTER PARTITION FUNCTION partition_function_name() SPLIT RANGE (boundary_value)
ALTER PARTITION FUNCTION partition_function_name() MERGE RANGE (boundary_value)

SPLIT partition
To split the range, you first have to instruct SQL Server which filegroup should hold the new data and then alter the partition
function.

You may have to create a new filegroup and add a file to that filegroup before the split.

First execute this command…


ALTER PARTITION SCHEME pScheme NEXT USED [FileGroupName]

Then, execute this command...


ALTER PARTITION FUNCTION pFunc() SPLIT RANGE ('20150101')

Note: Add the new filegroup to the database before altering the partition scheme. Use the following command:
ALTER DATABASE PartitionTest ADD FILEGROUP FGL2015

Since the function is RANGE RIGHT, the new filegroup is added on the right. If the range is LEFT, the new filegroup is added on
the left. The rule of thumb to remember: whenever you split a range, the new partition is the one that will own the new boundary
value, and it is created on the NEXT USED filegroup.

Note: If you skip the filegroup name after "NEXT USED" in the ALTER PARTITION SCHEME syntax, any filegroup currently marked as
NEXT USED loses that designation; a subsequent split will then fail unless some filegroup is marked NEXT USED.
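A quick way to confirm the split is to list the function's boundary values from the catalog views (covered later in this document). A minimal sketch:

SELECT pf.name, prv.boundary_id, prv.value
FROM sys.partition_functions pf
JOIN sys.partition_range_values prv ON prv.function_id = pf.function_id
WHERE pf.name = 'pFunc'
ORDER BY prv.boundary_id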

Merge partition

Merge will combine 2 partitions into 1 partition; in other words, it moves the data from one partition to another and
removes the former.
Here is the syntax:
ALTER PARTITION FUNCTION p_func_left() MERGE RANGE ('20120101')

Note: If the range is LEFT, the LEFT partition is merged into the RIGHT partition because the LEFT partition holds the
boundary value. For RANGE RIGHT, it is the other way around.

DROP partition function

DROP PARTITION FUNCTION partition_function_name

You must drop all partition schemes that reference the partition function before you can drop the function.
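For example, a complete tear-down in dependency order would look like this (a sketch, assuming the demo objects from Appendix A and nothing else referencing them):

DROP TABLE dbo.orders_right               -- or move the table off the scheme by rebuilding it on a regular filegroup
GO
DROP PARTITION SCHEME p_scheme_right
GO
DROP PARTITION FUNCTION p_func_right
GO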

SWITCH the Partition

To delete or load data from a partition quickly, you switch partitions.
To delete the data in a partition, simply switch that partition out to a non-partitioned table. Similarly, to load data into a
particular partition, switch a non-partitioned table into that partition. The data is switched within a second.

Prerequisites for Switching Partitions

In order to switch partitions between two tables, the following list of prerequisites must be satisfied:

1. Both tables participating in partition switching must exist before ALTER TABLE statement is executed.
2. If you are switching partitions between two partitioned tables, they must be partitioned using the same column.
However, two tables don't have to use the same partition function or the same partition scheme.
3. If you are switching partitions between two partitioned tables, then the destination partition must exist before
ALTER TABLE statement is executed. The destination partition must be empty.
4. If you are switching partitions between two partitioned tables, then switched partitions must reside on the same file
group. Furthermore, if you have any large value columns (these include columns with VARCHAR(MAX),
NVARCHAR(MAX), VARBINARY(MAX), TEXT, NTEXT, and IMAGE data type) in these tables all such columns must
reside in the same filegroup.
5. Indexes of the partitioned table must reside in the same filegroup as the partition you are switching. You may have
to drop any non-aligned indexes before you switch.
6. You can switch partitions into a non-partitioned table. This non-partitioned table must reside on the same filegroup
as the partition that is switched into it.

7. Both tables must have the same schema. This means:


• All columns existing in the source table must also exist in the destination table and have identical data types and
null-ability.
• Tables do not have to have primary keys. However, if source table has a primary key, then the destination table
must have a primary key on the same column(s).
• Identity columns are not considered when comparing source and destination tables for partition switching. This
means you can have different identity seed and increment on source and destination identity columns.
• If tables have any computed columns, they must have identical definitions and such columns must be persisted.
• If source table has a ROWGUID column, then destination table must also have a ROWGUID column.
• If you are using text-in-row option for any columns, the setting must be the same in source and destination
tables.
• If any columns use XML data type, corresponding columns in the other table must be typed to the same XML
schema collection.
• Tables don't have to have clustered indexes. However, if the source table has the clustered index, then the
destination must also have an identical clustered index. This includes key columns, their ordering, uniqueness
and any other non-default index settings. If the clustered index is disabled the partition switching cannot take
place.
• Tables don't have to have non-clustered indexes. However, if the source table has any non-clustered indexes
then destination table must also have identical non-clustered indexes. This includes key columns, their ordering,
included columns and index uniqueness. Non-clustered indexes can be disabled to be excluded from these
requirements.

8. Source and destination tables must have identical constraints or the source table constraint must be restricting a
subset of the corresponding constraint on the target table. For example, if the target table has a constraint FullDate
BETWEEN '1/1/2003' and '05/11/2003' then source table must constraint the same column between January 1st and
May 11th of 2003; so constraint FullDate BETWEEN '3/1/2003' and '4/1/2003' would be acceptable.
9. If you are switching partitions between two partitioned tables and the two tables use different partition schemes,
then the values of the partitioning column in the switched partition must be within the boundaries of a partition
defined on the destination table. For example, if we have values between 10 and 1000 in the source partition we
cannot switch to a partition whose lower boundary is 1001 or greater. The destination partition boundaries must
include all values that exist in the source partition.
10. If the destination table has any FOREIGN KEY constraints the source table must also have FOREIGN KEY constraints
defined on corresponding columns and must reference the same tables as the destination.
11. All indexes on the partitioned tables participating in partition switching must be aligned.
12. No full-text indexes are allowed on either table participating in partition switching.
13. No XML indexes are allowed on the destination table.
14. Destination table cannot reference the source table using FOREIGN KEY constraint.
15. No table can reference the source table using FOREIGN KEY constraint. Source table is allowed to reference other
tables, however.
16. If indexed views are defined on either source or destination tables, then partition switching will not work. You must
drop the indexed view first before switching partitions in and out of the partitioned table. This limitation will be
lifted in the future releases of software.
17. No rules can be bound to either source or destination table.
18. Neither source nor destination table can be published for replication.

When you execute the ALTER TABLE...SWITCH statement, SQL Server acquires a schema modification lock on both tables.
However, the locks are held for a very short period of time because only metadata must be updated.

Note: Source is toadworld.com
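For example, prerequisite #3 above (the destination partition must be empty) can be verified before the switch with a query like the following sketch, which assumes the destination table used in the example further below:

SELECT partition_number, rows
FROM sys.partitions
WHERE object_id = OBJECT_ID('dbo.OrderDetails_partitioned')
AND index_id IN (0, 1)        -- heap or clustered index
AND partition_number = 4      -- rows must be 0 before switching in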

SWITCH the Partition to non-partition table

In the sliding window technique, you might want to archive old data into a non-partitioned table. Sometimes you may also want
to purge the data. In these scenarios, you can switch the data in a partition out to a non-partitioned table. Since this is a
metadata operation, the switch happens in less than a second. The non-partitioned table should be in the same filegroup as the
partition and should have the same structure as the partitioned table.

To switch the data from partition 4 into the Orders_OLD table, execute the following command:
ALTER TABLE Orders SWITCH PARTITION 4 TO Orders_OLD;

Note:
If you are switching a partition of partitioned-table to another partitioned-table, then use the following command:
ALTER TABLE Orders_partitioned SWITCH PARTITION 4 TO OrderDetails_partitioned PARTITION 4

Make sure to put a check constraint on the table if you plan to switch this data back into the partitioned table. It is
better to define the check constraint up front when creating the staging table. Here is the syntax for the check constraint.
Since we are switching the partition that contains the records with dates up to '2011-01-01', the same range should be enforced
on the non-partitioned table.

ALTER TABLE [dbo].orders_left_old WITH NOCHECK ADD CONSTRAINT [chk_date_constraint] CHECK (create_dt >= '20100101' AND
create_dt <= '20110101')
GO
ALTER TABLE [dbo].orders_left_old WITH CHECK CHECK CONSTRAINT [chk_date_constraint]
GO

SWITCH the non-Partition to Partition table

To move data from a non-partitioned table into a partitioned table, you need to use the SWITCH option. The non-partitioned
table should be in the same filegroup as the target partition. The SWITCH command then changes the metadata, and the data
appears in the partitioned table in less than a second.

In this example, we move (switch) the data from the non-partitioned table (orders_left_old) to the 1st partition
of 'orders_left':
ALTER TABLE orders_left_old SWITCH TO Orders_left PARTITION 1;

Note:
You can also switch a non-partitioned table to another non-partitioned table. The requirement is that both tables have the same
structure and reside in the same filegroup. Here is the syntax:
ALTER TABLE Orders SWITCH TO Orders_Old

Of course, you can achieve the same thing with the sp_rename proc, but this is just another way to do it.

Best Practices

1) If you are using the sliding window technique, consider RANGE LEFT for the table partition function. If you are not
purging/archiving the old data in the partitioned table, consider RANGE RIGHT.

2) If there are heavy inserts to the table, you may have to consider hash partitioning (see the last section). This is not
strictly about range partitioning, but it is another important concept you may have to implement.

3) When the tables and indexes are defined with not only the same partition function but also the same partition
scheme, they are considered to be storage-aligned.
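For example, an extra nonclustered index on the orders_right demo table from Appendix A is storage-aligned when it is created on the same partition scheme and partitioning column (a sketch; the index name is made up):

CREATE NONCLUSTERED INDEX IX_orders_right_fname
ON dbo.orders_right (fname)
ON p_scheme_right (create_dt)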

Tips, Tricks and additional points

1) To switch in/switch out, you need to create staging tables with a structure identical to the partitioned table. This tool will
help you generate the script for creating the staging table.

http://sqlpartitionmgmt.codeplex.com/

Don't run this tool directly against a production server. Script it out in a DEV environment, review the script, and then execute
it against Prod.

2) You can compress individual partitions at different levels (page/row/none)


Alter Table dbo.yourTableName
Rebuild Partition = All
With
(
Data_Compression = Page On Partitions(1 to 9)
, Data_Compression = Row On Partitions(10 to 11)
, Data_Compression = None On Partitions(12)
);

Advantages of Partitioning

• Manageability– Manageability of a partitioned table/index becomes easier as you can rebuild/reorganize the indexes of each
partition separately. You can manage each partition separately; for example, you can take a backup of only the filegroups that
contain partitions holding volatile data.
• Query Performance– The query optimizer uses techniques to optimize and improve the query performance. For
example,
o Partition elimination – Partition Elimination is a technique used by query optimizer to not consider partitions
that don’t contain data requested by the query. For example, if a query requests data for only the years 2010
and 2011, in that case only two partitions will be considered during query optimization and execution unlike a
single large table where query optimizer will consider the whole dataset; the other partitions (2008, 2009 and
2012) will be simply ignored.
o Parallel Processing – Query Optimizer uses a technique to process each partition in parallel or even multiple CPU
cores can work together to work on a single partition. With this, the query optimizer tries to utilize modern
hardware resources efficiently. For example, if a query requests data for only the years 2010 and 2011, in that
case only two partitions will be considered during query optimization and suppose if you have 8 cores machine,
all 8 cores can work together to produce the result from the two identified partitions.
• Indexes– You can have different settings (FILLFACTOR) or different numbers of indexes for each partition of a table.
For example, the most recent year partition will have volatile data and will be both read and write intensive data
and used by OLTP applications and hence you should have the minimum number of indexes, whereas older
partitions will have mostly read only data and be used by Analytical applications and hence you can create more
indexes to make your analytical queries run faster.
• Compression– Compression is a new feature introduced in SQL Server 2008. It minimizes the need for storage space
at the cost of additional CPU cycles whenever data is read or written. Again, the most recent year partition will have
volatile data and be accessed frequently so ideally you should not compress it, whereas the older partitions will not
be accessed frequently and hence you can compress them to minimize the storage space requirement.
• Minimized time for Backup/Restore– For a large table, normally only the latest few partitions will be volatile and
hence you can take a regular backup of the file group (read-write) that contains this volatile data whereas you can
take occasional backups of the file group (read-only) that contains non-volatile data. This way, we can minimize the
downtime window and reduce the backup and restore time.
• Loading data to/from is fast– Data load into a partitioned table takes only seconds, instead of the minutes or hours the
operation takes on a non-partitioned table, using the SWITCH-IN technique described above.
• Data archival– Data archival from a partitioned table again takes only seconds, instead of the minutes or hours the
operation takes on a non-partitioned table, using the SWITCH-OUT technique described above.

Source: databasejournal.com

Find Metadata about PARTITION Functions/Schemes

Catalog views

Catalog views related to partition function

This shows how many partitions exist for a table:


SELECT * FROM sys.partitions WHERE object_name(object_id) = 'orders_left'
Contains a row for each partition function.
SELECT * FROM sys.partition_functions

Contains a row for each parameter of a partition function.


SELECT * FROM sys.partition_parameters

Contains a row for each range boundary value of a partition function of type R (range).
SELECT * FROM sys.partition_range_values

Catalog views related to partition scheme

This will show the partition schemes in the database:

SELECT * FROM sys.partition_schemes

This will list the data spaces in the database. Partition schemes and filegroups are both treated as data spaces:
SELECT * FROM sys.data_spaces

A partition scheme is essentially a virtual data space that points to the destination filegroups. The following view holds the
mapping between a partition scheme and its final filegroups:
SELECT * FROM sys.destination_data_spaces
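Joining these views gives a readable map from each partition of a scheme to its filegroup (a sketch):

SELECT ps.name AS scheme_name, dds.destination_id AS partition_number, fg.name AS filegroup_name
FROM sys.partition_schemes ps
JOIN sys.destination_data_spaces dds ON dds.partition_scheme_id = ps.data_space_id
JOIN sys.filegroups fg ON fg.data_space_id = dds.data_space_id
ORDER BY ps.name, dds.destination_id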

This will list the filegroups in the database.


SELECT * FROM sys.filegroups

$PARTITION System function

1)
Find out which partition this value belongs to:
SELECT $PARTITION.RangePF1(10001)

Let’s say partition function “RangePF1” has several ranges. If you pass a value to $PARTITION.function_name, it will
show which partition this value belongs to.

2)
The following SQL statement will show the count of rows in each partition of the table.

SELECT $PARTITION.pFunc(create_dt) AS PARTITIONID,
COUNT(*) AS ROW_COUNT
FROM DBO.Orders
GROUP BY $PARTITION.pFunc(create_dt)
ORDER BY PARTITIONID

3)
Returning all rows from one partition of a partitioned table or index. In this example, it retrieves all the rows in partition
# 5 of table Orders.
SELECT * FROM dbo.Orders
WHERE $PARTITION. pFunc(create_dt) = 5;

Partitioned table maintenance

Partitioning helps DBAs maintain large tables better in the following areas:
1. Index Maintenance of partitions
2. Partition FileGroup backups
3. Database consistency checks

1. INDEX fragmentation and defrag tips

Prior to SQL Server 2014, you cannot REBUILD a specific partition ONLINE; you can only rebuild it offline or REORGANIZE it.
Starting with SQL Server 2014, you can rebuild an individual partition ONLINE.
The current partition is usually more fragmented than older partitions due to high transactional activity.

Note: REBUILD ONLINE is available in Enterprise edition only.
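For example, on SQL Server 2014 or later (Enterprise edition) you could rebuild just partition 4 online; a sketch using the same index named in the offline example below:

ALTER INDEX IX_COL1
ON dbo.Orders
REBUILD PARTITION = 4
WITH (ONLINE = ON);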

Find fragmentation level for partitioned table

The following query will list how fragmented each partition is:

USE <Database_name>
GO
SELECT object_id, partition_number, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'dbo.Orders'), NULL, NULL, NULL);

Rebuild offline specific partition

The following syntax will REBUILD a specific partition offline (#4 in this example).
USE <Database_name>
GO
ALTER INDEX IX_COL1
ON dbo.Orders
REBUILD Partition = 4;

The rebuild requires additional free space in the data file, roughly proportional to the size of the partition being rebuilt.

REORG specific partition of a table

The following statement reorganizes a specific partition (#4) of the index:

USE <Database_name>
GO
ALTER INDEX IX_COL1
ON dbo.Orders REORGANIZE PARTITION = 4;

REORGANIZE compacts the leaf-level pages. Sometimes REORGANIZE will not bring the fragmentation level down enough; in that
scenario you have to REBUILD.

2. Partition filegroup backups

You can use the BACKUP DATABASE ... FILEGROUP statement to back up only the partitions of a database that reside on a single
filegroup. If the majority of fact table data does not change, there is no need to continuously back up static partitions. You
can back up the filegroups that host static partitions once (or seldom) and regularly back up the filegroups that contain
changing data. For example, the following statement backs up only Filegroup_2007 of the AdventureWorksDW database:

/* change Filegroup_2007 to read_only before running a backup */


ALTER DATABASE AdventureWorksDW MODIFY FILEGROUP Filegroup_2007 READ_ONLY
GO
BACKUP DATABASE AdventureWorksDW FILEGROUP = 'Filegroup_2007' TO DISK = 'c:\Filegroup_2007_backup.bak'

The next statement backs up two filegroups:


BACKUP DATABASE AdventureWorksDW FILEGROUP = 'Filegroup_2004', FILEGROUP = 'Filegroup_2003' TO DISK =
'c:\Filegroups_2003_2004.bak'

Note that when you restore databases, you don't have to restore every filegroup before data becomes available to your
users. Suppose you experience a hardware failure and need to restore the database. Your data warehouse is 2 TB, but to
obtain results for 80% of user queries, only the most recent data is necessary. With SQL Server 2005, you can restore the
primary filegroup (which should be reserved for system objects) and the filegroup containing the most recent partitions.
At this point users can start querying data. You can later restore the remaining filegroups, perhaps during limited user
activity, after business hours or over the weekend.

Note: These points were copied from toadworld.com.
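The following piecemeal-restore sketch shows the idea. It assumes the SIMPLE recovery model, that PRIMARY is the only read-write filegroup (the yearly filegroups were marked READ_ONLY as above), and a hypothetical full backup file named AdventureWorksDW_full.bak:

-- Stage 1: restore PRIMARY (the read-write filegroup); the database comes online immediately
RESTORE DATABASE AdventureWorksDW FILEGROUP = 'PRIMARY'
FROM DISK = 'c:\AdventureWorksDW_full.bak'
WITH PARTIAL, RECOVERY
GO
-- Stage 2 (later, e.g. after business hours): bring back an older read-only filegroup from its filegroup backup
RESTORE DATABASE AdventureWorksDW FILEGROUP = 'Filegroup_2007'
FROM DISK = 'c:\Filegroup_2007_backup.bak'
WITH RECOVERY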

3. Database consistency checks


You can check the structural integrity of only certain partitions using the DBCC CHECKFILEGROUP statement. The statement
checks all partitions that reside on the specified filegroup.

DBCC CHECKFILEGROUP ('Filegroup_2007')

Appendix A: Script to create RANGE RIGHT Partition table

1) Create the database.

CREATE DATABASE PartitionTest

2) Create the filegroups.

ALTER DATABASE PartitionTest ADD FILEGROUP FG2000


GO
ALTER DATABASE PartitionTest ADD FILEGROUP FG2012
GO
ALTER DATABASE PartitionTest ADD FILEGROUP FG2013
GO
ALTER DATABASE PartitionTest ADD FILEGROUP FG2014
GO

3) Create the files for the filegroups.

ALTER DATABASE PartitionTest


ADD FILE
(NAME = N'data_2000',
FILENAME = N'E:\SQLData\data_2000.ndf',
SIZE = 10MB,
MAXSIZE = 10MB,
FILEGROWTH = 10MB)
TO FILEGROUP [FG2000]
GO
ALTER DATABASE PartitionTest
ADD FILE
(NAME = N'data_2012',
FILENAME = N'E:\SQLData\data_2012.ndf',
SIZE = 10MB,
MAXSIZE = 10MB,
FILEGROWTH = 10MB)
TO FILEGROUP [FG2012]
GO
ALTER DATABASE PartitionTest
ADD FILE
(NAME = N'data_2013',
FILENAME = N'E:\SQLData\data_2013.ndf',
SIZE = 10MB,
MAXSIZE = 10MB,
FILEGROWTH = 10MB)
TO FILEGROUP [FG2013]
GO
ALTER DATABASE PartitionTest
ADD FILE
(NAME = N'data_2014',
FILENAME = N'E:\SQLData\data_2014.ndf',
SIZE = 10MB,
MAXSIZE = 10MB,
FILEGROWTH = 10MB)
TO FILEGROUP [FG2014]
GO

4) Create partition function

USE PartitionTest
GO

DECLARE @boundary1 DATETIME


DECLARE @boundary2 DATETIME
DECLARE @boundary3 DATETIME
SELECT @boundary1 = '20120101', @boundary2 = '20130101', @boundary3 = '20140101'

CREATE PARTITION FUNCTION p_func_right(DATETIME) AS


RANGE RIGHT FOR VALUES
(@boundary1, @boundary2, @boundary3)

5) Create partition scheme

USE PartitionTest
GO

CREATE PARTITION SCHEME p_scheme_right AS


PARTITION p_func_right TO
([FG2000],
[FG2012],
[FG2013],
[FG2014])

6) Create the table orders_right

CREATE TABLE orders_right


(
id INT IDENTITY(1, 1),
fname VARCHAR(20),
lname VARCHAR(20),
create_dt DATETIME not null,
constraint pk_orders_right PRIMARY KEY CLUSTERED (create_dt)
)
ON p_scheme_right (create_dt)

7)
Insert some rows into the table:

-- the id column is an IDENTITY, so list the remaining columns explicitly
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2010-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2010-02-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2011-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2011-02-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2012-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2012-02-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2013-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2013-02-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2014-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2014-02-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2014-12-31 23:59:59')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2015-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2015-02-01')

8)
We can use $PARTITION function to retrieve row counts for each partition:

SELECT $PARTITION.p_func_right (create_dt) AS partition_id,


COUNT(*) AS row_count
FROM dbo.orders_right
GROUP BY $PARTITION.p_func_right (create_dt)
ORDER BY 1

You could retrieve all records for a particular partition identifier using the following syntax, again using $PARTITION
function:

SELECT * FROM dbo.orders_right


WHERE $PARTITION.p_func_right (create_dt) = 2

Appendix B: Script to create RANGE LEFT Partition table

This example will create a RANGE LEFT partition function, a partition scheme, and a table on that scheme. Splitting the range
will place the new partition on the LEFT side.

1) Create the database.

CREATE DATABASE PartitionTest

2) Create the filegroups.

ALTER DATABASE PartitionTest ADD FILEGROUP FGL2000


GO
ALTER DATABASE PartitionTest ADD FILEGROUP FGL2011
GO
ALTER DATABASE PartitionTest ADD FILEGROUP FGL2012
GO
ALTER DATABASE PartitionTest ADD FILEGROUP FGL2013
GO
ALTER DATABASE PartitionTest ADD FILEGROUP FGL2014
GO

3) Create the files for the filegroups.

ALTER DATABASE PartitionTest


ADD FILE
(NAME = N'data_l_2000',
FILENAME = N'E:\SQLData\data_l_2000.ndf',
SIZE = 10MB,
MAXSIZE = 10MB,
FILEGROWTH = 10MB)
TO FILEGROUP [FGL2000]
GO
ALTER DATABASE PartitionTest
ADD FILE
(NAME = N'data_l_2011',
FILENAME = N'E:\SQLData\data_l_2011.ndf',
SIZE = 10MB,
MAXSIZE = 10MB,
FILEGROWTH = 10MB)
TO FILEGROUP [FGL2011]
GO
ALTER DATABASE PartitionTest
ADD FILE
(NAME = N'data_l_2012',
FILENAME = N'E:\SQLData\data_l_2012.ndf',
SIZE = 10MB,
MAXSIZE = 10MB,
FILEGROWTH = 10MB)
TO FILEGROUP [FGL2012]
GO
ALTER DATABASE PartitionTest
ADD FILE
(NAME = N'data_l_2013',
FILENAME = N'E:\SQLData\data_l_2013.ndf',
SIZE = 10MB,
MAXSIZE = 10MB,
FILEGROWTH = 10MB)
TO FILEGROUP [FGL2013]
GO
ALTER DATABASE PartitionTest
ADD FILE
(NAME = N'data_l_2014',
FILENAME = N'E:\SQLData\data_l_2014.ndf',
SIZE = 10MB,
MAXSIZE = 10MB,
FILEGROWTH = 10MB)
TO FILEGROUP [FGL2014]
GO

4)
Create partition function

DECLARE @boundary1 DATETIME


DECLARE @boundary2 DATETIME
DECLARE @boundary3 DATETIME
SELECT @boundary1 = '20120101', @boundary2 = '20130101', @boundary3 = '20140101'

CREATE PARTITION FUNCTION p_func_left(DATETIME) AS


RANGE LEFT FOR VALUES
(@boundary1, @boundary2, @boundary3)

5)
Create partition scheme

CREATE PARTITION SCHEME p_scheme_left AS


PARTITION p_func_left TO
([FGL2000],
[FGL2012],
[FGL2013],
[FGL2014])

6)
Create the table

CREATE TABLE orders_left


(
id INT IDENTITY(1, 1),
fname VARCHAR(20),
lname VARCHAR(20),
create_dt DATETIME
)
ON p_scheme_left (create_dt)

7) Insert a few rows into the table “orders_left”


-- the id column is an IDENTITY, so list the remaining columns explicitly
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2010-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2004-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '20101231 23:59:59.997')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2011-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2011-02-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '20111231 23:59:59.997')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2012-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2012-02-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2013-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2013-02-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '20131231 23:59:59.997')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2014-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2014-02-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '20141231 23:59:59.997')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2015-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2015-02-01')

8) Select the rows from the table.

select * from orders_left


order by create_dt
;

SELECT $PARTITION.p_func_left(create_dt) AS PARTITIONID,


COUNT(* ) AS ROW_COUNT
FROM DBO.Orders_left
GROUP BY $PARTITION.p_func_left(create_dt)
ORDER BY PARTITIONID
;

9)
Split the partition. Introduce a new boundary of '2011-01-01'; the new partition will be placed on FGL2011.

ALTER PARTITION SCHEME p_scheme_left NEXT USED [FGL2011]

ALTER PARTITION FUNCTION p_func_left() SPLIT RANGE ('20110101')

10)
Select the rows from the 2011 partition. The newly added partition will be on the LEFT side since we specified RANGE LEFT.

select * from orders_left


order by create_dt
;

SELECT $PARTITION.p_func_left(create_dt) AS PARTITIONID,


COUNT(* ) AS ROW_COUNT
FROM dbo.orders_left
GROUP BY $PARTITION.p_func_left(create_dt)
ORDER BY PARTITIONID
;

SELECT * FROM dbo.orders_left
WHERE $PARTITION.p_func_left(create_dt) = 2
;

Appendix C: Sample scripts


Sample scripts used in RANGE RIGHT demo
DECLARE @boundary1 DATETIME
DECLARE @boundary2 DATETIME
DECLARE @boundary3 DATETIME
SELECT @boundary1 = '20120101', @boundary2 = '20130101', @boundary3 = '20140101'

CREATE PARTITION FUNCTION p_func_right(DATETIME) AS


RANGE RIGHT FOR VALUES (@boundary1, @boundary2, @boundary3)
-----------------
CREATE PARTITION SCHEME p_scheme_right AS
PARTITION p_func_right TO
([FGL2000],
[FGL2012],
[FGL2013],
[FGL2014])
------------------
CREATE TABLE orders_right
(
id INT IDENTITY(1, 1),
fname VARCHAR(20),
lname VARCHAR(20),
create_dt DATETIME,
primary key (create_dt)
)
ON p_scheme_right (create_dt)
-------------------
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2010-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2010-02-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2011-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2011-02-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2012-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2012-02-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2013-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2013-02-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2014-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2014-02-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2014-12-31 23:59:59')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2015-01-01')
insert into orders_right (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2015-02-01')
------------------
select * from orders_right

SELECT $PARTITION.p_func_right(create_dt) as part_num,* FROM dbo.orders_right

SELECT $PARTITION.p_func_right(create_dt) AS PARTITIONID,COUNT(*) AS ROW_COUNT


FROM dbo.orders_right
GROUP BY $PARTITION.p_func_right(create_dt)
ORDER BY PARTITIONID
------------------
SELECT * FROM sys.partitions WHERE object_name(object_id) = 'orders_right'
SELECT * FROM sys.partition_functions
SELECT * FROM sys.partition_parameters
SELECT * FROM sys.partition_range_values
------------------
SELECT * FROM sys.partition_schemes
SELECT * FROM sys.data_spaces
SELECT * FROM sys.destination_data_spaces
SELECT * FROM sys.filegroups
---------------------------------
SELECT * FROM dbo.orders_right
WHERE $PARTITION.p_func_right (create_dt) = 2
------------------
ALTER PARTITION SCHEME p_scheme_right NEXT USED [FGL2011]
-------------------
ALTER PARTITION FUNCTION p_func_right() SPLIT RANGE ('20110101')
-------------------
ALTER PARTITION FUNCTION p_func_right() MERGE RANGE ('20110101')

Sample scripts used in RANGE LEFT demo


DECLARE @boundary1 DATETIME
DECLARE @boundary2 DATETIME
DECLARE @boundary3 DATETIME
SELECT @boundary1 = '20111231 23:59:59.997', @boundary2 = '20121231 23:59:59.997', @boundary3 = '20131231 23:59:59.997'

CREATE PARTITION FUNCTION p_func_left(DATETIME) AS


RANGE LEFT FOR VALUES (@boundary1, @boundary2, @boundary3)
-----------------
CREATE PARTITION SCHEME p_scheme_left AS
PARTITION p_func_left TO
([FGLeft2000],
[FGLeft2012],
[FGLeft2013],
[FGLeft2014])
------------------
CREATE TABLE orders_left
(
id INT IDENTITY(1, 1),
fname VARCHAR(20),
lname VARCHAR(20),
create_dt DATETIME,
primary key (create_dt)
)
ON p_scheme_left (create_dt)
------------------
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2010-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2004-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '20101231 23:59:59.997')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2011-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2011-02-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '20111231 23:59:59.997')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2012-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2012-02-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2013-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2013-02-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '20131231 23:59:59.997')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2014-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2014-02-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '20141231 23:59:59.997')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2015-01-01')
insert into orders_left (fname, lname, create_dt) values('Ramu', 'Poonjolai', '2015-02-01')
--delete from orders_left
------------------
select * from orders_left

select $PARTITION.p_func_left(create_dt) as part_num,* FROM dbo.orders_left

SELECT $PARTITION.p_func_left(create_dt) AS PARTITIONID,COUNT(*) AS ROW_COUNT


FROM dbo.orders_left
GROUP BY $PARTITION.p_func_left(create_dt)
ORDER BY PARTITIONID
-- delete from dbo.orders_left where $PARTITION.p_func_left(create_dt) = 4
------------------
CREATE TABLE [dbo].[orders_2000](
[id] [int] IDENTITY(1,1) NOT NULL,
[fname] [varchar](20) NULL,
[lname] [varchar](20) NULL,
[create_dt] [datetime] NULL
) ON [FGLeft2000]

select * from orders_2000


------------------
SELECT $PARTITION.p_func_left(create_dt) partition_num,* FROM dbo.orders_left --WHERE $PARTITION.p_func_left(create_dt) = 1
------------------
SELECT * FROM dbo.orders_left
------------------
ALTER PARTITION SCHEME p_scheme_left NEXT USED [FGLeft2011]
-------------------
ALTER PARTITION FUNCTION p_func_left() SPLIT RANGE ('20141231 23:59:59.997')
-------------------
SELECT * FROM sys.partitions WHERE object_name(object_id) = 'orders_left'
SELECT * FROM sys.partition_functions
SELECT * FROM sys.partition_parameters
SELECT * FROM sys.partition_range_values
------------------
SELECT * FROM sys.partition_schemes
SELECT * FROM sys.data_spaces
SELECT * FROM sys.destination_data_spaces
SELECT * FROM sys.filegroups
-------------------
ALTER PARTITION FUNCTION p_func_left() MERGE RANGE ('20101231 23:59:59.997')
-------------------
ALTER TABLE orders_left SWITCH PARTITION 1 TO Orders_2000

Sample script to find the expensive queries

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROC [dbo].[usp_Worst_TSQL]
/*
Written by: Gregory A. Larsen
Copyright © 2008 Gregory A. Larsen. All rights reserved.

Name: usp_Worst_TSQL
Description: This stored procedure displays the top worst performing queries based on CPU, Execution Count,
I/O and Elapsed_Time as identified using DMV information. This can be display the worst
performing queries from an instance, or database perspective. The number of records shown,
the database, and the sort order are identified by passing parameters.

Parameters: There are three different parameters that can be passed to this procedures: @DBNAME, @COUNT
and @ORDERBY. The @DBNAME is used to constraint the output to a specific database. If
when calling this SP this parameter is set to a specific database name then only statements
that are associated with that database will be displayed. If the @DBNAME parameter is not set
then this SP will return rows associated with any database. The @COUNT parameter allows you
to control the number of rows returned by this SP. If this parameter is used then only the
TOP x rows, where x is equal to @COUNT will be returned, based on the @ORDERBY parameter.
The @ORDERBY parameter identifies the sort order of the rows returned in descending order.
This @ORDERBY parameter supports the following types: ACPU, TCPU, AE, TE, EC, AIO, TIO, ALR, TLR, ALW, TLW, APR,
and TPR
where "ACPU" represents Average CPU Usage
"TCPU" represents Total CPU usage
"AE" represents Average Elapsed Time
"TE" represents Total Elapsed Time
"EC" represents Execution Count
"AIO" represents Average IOs
"TIO" represents Total IOs
"ALR" represents Average Logical Reads
"TLR" represents Total Logical Reads
"ALW" represents Average Logical Writes
"TLW" represents Total Logical Writes
"APR" represents Average Physical Reads
"TPR" represents Total Physical Read

Typical execution calls

Top 6 statements in the AdventureWorks database base on Average CPU Usage:


EXEC usp_Worst_TSQL @DBNAME='AdventureWorks',@COUNT=6,@ORDERBY='ACPU';

Top 100 statements order by Average IO


EXEC usp_Worst_TSQL @COUNT=100,@ORDERBY='ALR';

Show top all statements by Average IO


EXEC usp_Worst_TSQL;

*/
(@DBNAME VARCHAR(128) = '<not supplied>'
,@COUNT INT = 999999999
,@ORDERBY VARCHAR(4) = 'AIO')
AS
-- Check for valid @ORDERBY parameter
IF ((SELECT CASE WHEN
@ORDERBY in ('ACPU','TCPU','AE','TE','EC','AIO','TIO','ALR','TLR','ALW','TLW','APR','TPR')
THEN 1 ELSE 0 END) = 0)
BEGIN
-- abort if invalid @ORDERBY parameter entered
RAISERROR('@ORDERBY parameter not ACPU, TCPU, AE, TE, EC, AIO, TIO, ALR, TLR, ALW, TLW, APR or TPR',11,1)
RETURN
END
SELECT TOP (@COUNT)
COALESCE(DB_NAME(st.dbid),
DB_NAME(CAST(pa.value AS INT))+'*',
'Resource') AS [Database Name]
-- find the offset of the actual statement being executed
,SUBSTRING(text,
CASE WHEN statement_start_offset = 0
OR statement_start_offset IS NULL
THEN 1
ELSE statement_start_offset/2 + 1 END,
CASE WHEN statement_end_offset = 0
OR statement_end_offset = -1
OR statement_end_offset IS NULL
THEN LEN(text)
ELSE statement_end_offset/2 END -
CASE WHEN statement_start_offset = 0
OR statement_start_offset IS NULL
THEN 1
ELSE statement_start_offset/2 END + 1
) AS [Statement]
,OBJECT_SCHEMA_NAME(st.objectid,dbid) [Schema Name]
,OBJECT_NAME(st.objectid,dbid) [Object Name]
,objtype [Cached Plan objtype]
,execution_count [Execution Count]
,(total_logical_reads + total_logical_writes + total_physical_reads )/execution_count [Average IOs]
,total_logical_reads + total_logical_writes + total_physical_reads [Total IOs]
,total_logical_reads/execution_count [Avg Logical Reads]
,total_logical_reads [Total Logical Reads]
,total_logical_writes/execution_count [Avg Logical Writes]
,total_logical_writes [Total Logical Writes]
,total_physical_reads/execution_count [Avg Physical Reads]
,total_physical_reads [Total Physical Reads]
,total_worker_time / execution_count [Avg CPU]
,total_worker_time [Total CPU]
,total_elapsed_time / execution_count [Avg Elapsed Time]
,total_elapsed_time [Total Elapsed Time]
,last_execution_time [Last Execution Time]
FROM sys.dm_exec_query_stats qs
JOIN sys.dm_exec_cached_plans cp ON qs.plan_handle = cp.plan_handle
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) st
OUTER APPLY sys.dm_exec_plan_attributes(qs.plan_handle) pa
WHERE attribute = 'dbid' AND
CASE when @DBNAME = '<not supplied>' THEN '<not supplied>'
ELSE COALESCE(DB_NAME(st.dbid),
DB_NAME(CAST(pa.value AS INT)) + '*',
'Resource') END
IN (RTRIM(@DBNAME),RTRIM(@DBNAME) + '*')

ORDER BY CASE
WHEN @ORDERBY = 'ACPU' THEN total_worker_time / execution_count
WHEN @ORDERBY = 'TCPU' THEN total_worker_time
WHEN @ORDERBY = 'AE' THEN total_elapsed_time / execution_count
WHEN @ORDERBY = 'TE' THEN total_elapsed_time
WHEN @ORDERBY = 'EC' THEN execution_count
WHEN @ORDERBY = 'AIO' THEN (total_logical_reads + total_logical_writes + total_physical_reads) /
execution_count
WHEN @ORDERBY = 'TIO' THEN total_logical_reads + total_logical_writes + total_physical_reads
WHEN @ORDERBY = 'ALR' THEN total_logical_reads / execution_count
WHEN @ORDERBY = 'TLR' THEN total_logical_reads
WHEN @ORDERBY = 'ALW' THEN total_logical_writes / execution_count
WHEN @ORDERBY = 'TLW' THEN total_logical_writes
WHEN @ORDERBY = 'APR' THEN total_physical_reads / execution_count
WHEN @ORDERBY = 'TPR' THEN total_physical_reads
END DESC

Source: databasejournal.com

SQL Server hash partitioning

Sometimes, if you are doing heavy inserts into a table that has an IDENTITY column (so records are
inserted sequentially), you will see PAGELATCH_UP and PAGELATCH_EX wait stats. Inserts
are slowed down because every record is written to the last page of the B-tree index. Due to
competition from several processes all writing records into that last page of the B-tree, you see
page-latch waits. To reduce the contention, you may have to use hash partitioning on the
table.
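One way to spot this pattern is to look at the page-latch wait types in the wait-stats DMV. High PAGELATCH_EX/PAGELATCH_UP numbers alone do not prove last-page contention, but they are a reasonable starting point (a minimal sketch):

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('PAGELATCH_EX', 'PAGELATCH_UP')
ORDER BY wait_time_ms DESC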

1)
Create partition function and scheme

CREATE PARTITION FUNCTION pf_hash (INT) AS RANGE LEFT FOR VALUES (0, 1, 2)
CREATE PARTITION SCHEME ps_hash AS PARTITION pf_hash ALL TO ([PRIMARY])
2)
Create the table with a persisted computed hash column (the partition scheme is applied by the clustered index in the next step)
CREATE TABLE HeavyInsert_Hash (
ID INT NOT NULL
, Col1 VARCHAR (50)
, HashID AS CAST (ABS (ID % 4) AS TINYINT) PERSISTED NOT NULL)

3)
Create unique index on partition scheme
CREATE UNIQUE CLUSTERED INDEX CIX_Hash
ON HeavyInsert_Hash (ID, HashID) ON ps_hash (HashID)
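Because HashID is a persisted computed column, application code keeps inserting only the natural columns, and the rows spread across the four hash partitions automatically (a small sketch; the values are made up):

INSERT INTO HeavyInsert_Hash (ID, Col1) VALUES (42, 'some order data')
SELECT $PARTITION.pf_hash(ABS(42 % 4)) AS partition_number   -- returns 3: the row landed in the third hash partition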

4)
SELECTs will suffer because the query has to go through all 4 B-trees (one for each partition) when
you select by ID alone:
SELECT * FROM HeavyInsert_Hash WHERE ID = 42

If you want partition elimination, you may have to rewrite the queries like this:
SELECT * FROM HeavyInsert_Hash WHERE ID = 42 AND HashID = 42 % 4

Source: sqlcat.com
