THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
To manage your database (query it, monitor it, and so on) you need a tool. Several database tools are available. Your databases can run in the cloud, on Windows, or on Linux, but your tool doesn't need to run on the same platform as the database.
This article provides information about the available tools for working with your SQL databases.
Microsoft SQL Operations Studio (preview): SQL Operations Studio (preview) is a free, lightweight tool for managing databases wherever they're running. This preview release provides database management features, including an extended Transact-SQL editor and customizable insights into the operational state of your databases. SQL Operations Studio (preview) runs on Windows, macOS, and Linux.
SQL Server Management Studio (SSMS): Use SQL Server Management Studio (SSMS) to query, design, and manage your SQL Server, Azure SQL Database, and Azure SQL Data Warehouse. SSMS runs on Windows.
SQL Server Data Tools (SSDT): Turn Visual Studio into a powerful development environment for SQL Server, Azure SQL Database, and Azure SQL Data Warehouse. SSDT runs on Windows.
Visual Studio Code: After installing Visual Studio Code, install the mssql extension for developing Microsoft SQL Server, Azure SQL Database, and Azure SQL Data Warehouse. Visual Studio Code runs on Windows, macOS, and Linux.
Additional tools

SQL Server Migration Assistant: Use SQL Server Migration Assistant to automate database migration to SQL Server from Microsoft Access, DB2, MySQL, Oracle, and Sybase.
Distributed Replay: Use the Distributed Replay feature to help you assess the impact of future SQL Server upgrades. Also use Distributed Replay to help assess the impact of hardware and operating system upgrades, and SQL Server tuning.

Command-line utilities (with default install location):

bcp Utility: Used to copy data between an instance of Microsoft SQL Server and a data file in a user-specified format. Installed in: <drive>:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn
Deploy Model Solutions with the Deployment Utility: Used to deploy Analysis Services projects to instances of Analysis Services. Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn\VShell\Common7\IDE
mssql-scripter (Public Preview): Used to generate CREATE and INSERT T-SQL scripts for database objects in SQL Server, Azure SQL Database, and Azure SQL Data Warehouse. See our GitHub repo for download and usage information.
Profiler Utility: Used to start SQL Server Profiler from a command prompt. Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
RS.exe Utility (SSRS): Used to run scripts designed for managing Reporting Services report servers. Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
rsconfig Utility (SSRS): Used to configure a report server connection. Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
rskeymgmt Utility (SSRS): Used to manage encryption keys on a report server. Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
sqlagent90 Application: Used to start SQL Server Agent from a command prompt. Installed in: <drive>:\Program Files\Microsoft SQL Server\<instance_name>\MSSQL\Binn
sqlcmd Utility: Allows you to enter Transact-SQL statements, system procedures, and script files at the command prompt. Installed in: <drive>:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn
sqlps Utility: Used to run PowerShell commands and scripts. Loads and registers the SQL Server PowerShell provider and cmdlets. Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn
sqlservr Application: Used to start and stop an instance of Database Engine from the command prompt for troubleshooting. Installed in: <drive>:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn
Ssms Utility: Used to start SQL Server Management Studio from a command prompt. Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn\VSShell\Common7\IDE
tablediff Utility: Used to compare the data in two tables for non-convergence, which is useful when troubleshooting a replication topology. Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\COM
Nearly every day Microsoft updates some of its existing articles on its Docs.Microsoft.com documentation website.
This article displays excerpts from recently updated articles. Links to new articles might also be listed.
This article is generated by a program that is rerun periodically. Occasionally an excerpt can appear with imperfect
formatting, or as markdown from the source article. Images are never displayed here.
Recent updates are reported for the following date range and subject:
Date range of updates: 2018-02-03 to 2018-04-28
Subject area: Tools for SQL Server.
1. bcp Utility
Updated: 2018-04-25
-G This switch is used by the client when connecting to Azure SQL Database or Azure SQL Data Warehouse to
specify that the user be authenticated using Azure Active Directory authentication. The -G switch requires version
14.0.3008.27 or later. To determine your version, execute bcp -v. For more information, see Use Azure Active
Directory Authentication for authentication with SQL Database or SQL Data Warehouse.
TIP
To check whether your version of bcp includes support for Azure Active Directory Authentication (AAD), type bcp -- (bcp<space><dash><dash>) and verify that you see -G in the list of available arguments.
The following example imports data using an Azure AD username and password, where the user and password are AAD credentials. The example imports data from file c:\last\data1.dat into table bcptest in database testdb on Azure server aadserver.database.windows.net using Azure AD User/Password:
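The example command itself is not reproduced in this excerpt. A representative bcp invocation matching that description might look like the following sketch; the user name and password are placeholders, not values from the source:

```shell
# Import c:\last\data1.dat into table bcptest in database testdb on
# aadserver.database.windows.net, authenticating with an Azure AD user
# via the -G switch (requires bcp version 14.0.3008.27 or later).
# -c uses character format; credentials below are placeholders.
bcp bcptest in "c:\last\data1.dat" -c -S aadserver.database.windows.net -d testdb -G -U someuser@example.com -P "<password>"
```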
SQL Operations Studio (preview) is a free tool that runs on Windows, macOS, and Linux, for managing SQL Server, Azure SQL Database, and Azure SQL Data Warehouse, wherever they're running.
Download and Install SQL Operations Studio (preview)
Integrated Terminal
Use your favorite command-line tools (for example, Bash, PowerShell, sqlcmd, bcp, and ssh) in the Integrated
Terminal window right within the SQL Operations Studio (preview) user interface. To learn about the integrated
terminal, see Integrated terminal.
Next steps
Download and Install SQL Operations Studio (preview)
Connect and query SQL Server
Connect and query Azure SQL Database
Download SQL Server Management Studio (SSMS)
6/26/2018
THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
SSMS is an integrated environment for managing any SQL infrastructure, from SQL Server to SQL Database.
SSMS provides tools to configure, monitor, and administer instances of SQL. Use SSMS to deploy, monitor, and
upgrade the data-tier components used by your applications, as well as build queries and scripts.
Use SQL Server Management Studio (SSMS) to query, design, and manage your databases and data warehouses, wherever they are - on your local computer or in the cloud.
SSMS is free!
SSMS 17.x is the latest generation of SQL Server Management Studio and provides support for SQL Server 2017.
Download SQL Server Management Studio 17.8.1 Upgrade Package (upgrades 17.x to 17.8.1)
Version Information
Release number: 17.8.1
Build number: 14.0.17277.0
Release date: June 26, 2018
The SSMS 17.x installation does not upgrade or replace SSMS versions 16.x or earlier. SSMS 17.x installs side by side with previous versions, so both versions are available for use. If a computer contains side-by-side installations of SSMS, verify that you start the correct version for your specific needs. The latest version is labeled Microsoft SQL Server Management Studio 17 and has a new icon.
Available Languages
NOTE
Non-English localized releases of SSMS require the KB 2862966 security update package if installing on: Windows 8, Windows
7, Windows Server 2012, and Windows Server 2008 R2.
NOTE
The SQL Server PowerShell module is now a separate install through the PowerShell Gallery. For more information, see
Download SQL Server PowerShell Module.
Release Notes
The following are issues and limitations with this 17.8 release:
Clicking the Script button after modifying any filegroup property in the Properties window, generates two
scripts – one script with a USE statement, and a second script with a USE master statement. The script with USE
master is generated in error and should be discarded. Run the script that contains the USE statement.
Some dialogs display an invalid edition error when working with new General Purpose or Business Critical
Azure SQL Database editions.
Some latency in the XEvents viewer may be observed. This is a known issue in the .NET Framework. Consider upgrading to .NET Framework 4.7.2.
3. Uninstall Microsoft Visual C++ 2015 Redistributable the same way you uninstall any application. Uninstall
both x86 and x64 if they're on your computer.
4. Reinstall Visual Studio 2015 IsoShell from an elevated cmd prompt:
PUSHD "C:\ProgramData\Package Cache\FE948F0DAB52EB8CB5A740A77D8934B9E1A8E301\redist"
vs_isoshell.exe /PromptRestart
5. Reinstall SSMS.
6. Upgrade to the latest version of the Visual C++ 2015 Redistributable if you're not currently up to date.
Previous releases
Previous SQL Server Management Studio Releases
Feedback
SQL Client Tools Forum
Get Help
UserVoice - Suggestion to improve SQL Server?
Setup and Upgrade - MSDN Forum
SQL Server Data Tools - MSDN forum
Transact-SQL - MSDN forum
DBA Stack Exchange (tag sql-server) - ask SQL Server questions
Stack Overflow (tag sql-server) - also has some answers about SQL development
Reddit - general discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
See Also
Tutorial: SQL Server Management Studio
SQL Server Management Studio documentation
Additional updates and service packs
Download SQL Server Data Tools (SSDT)
THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
SQL Server Data Tools is a modern development tool for building SQL Server relational databases, Azure SQL databases, Analysis Services (AS) data models, Integration Services (IS) packages, and Reporting Services (RS) reports. With SSDT, you can design and deploy any SQL Server content type with the same ease as you would develop an application in Visual Studio.
For most users, SQL Server Data Tools (SSDT) is installed during Visual Studio installation. Installing SSDT using the Visual Studio installer adds the base SSDT functionality, but you still need to run the SSDT standalone installer to get the AS, IS, and RS tools.
IMPORTANT
Before installing SSDT for Visual Studio 2017 (15.7.1), uninstall Analysis Services Projects and Reporting Services Projects
extensions if they are already installed, and close all VS instances.
When installing SSDT on Windows 10 and choosing Install new SQL Server Data Tools for Visual Studio 2017 instance, clear any selected checkboxes and install the new instance first. After the new instance is installed, reboot the computer and open the SSDT installer again to continue the installation.
Version Information
Release number: 15.7.1
Build number: 14.0.16167.0
Release date: July 02, 2018
For a complete list of changes, see the changelog.
SSDT for Visual Studio 2017 has the same system requirements as Visual Studio.
Available Languages - SSDT for VS 2017
This release of SSDT for VS 2017 can be installed in the following languages:
Chinese (People's Republic of China) | Chinese (Taiwan) | English (United States) | French
German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian | Spanish
NOTE
The SSDT for VS 2015 17.4 ISO images are now available.
Chinese (People's Republic of China) | Chinese (Taiwan) | English (United States) | French
German | Italian | Japanese | Korean | Portuguese (Brazil) | Russian | Spanish
Next steps
After installing SSDT, work through these tutorials to learn how to create databases, packages, data models, and
reports using SSDT:
Project-Oriented Offline Database Development
SSIS Tutorial: Create a Simple ETL Package
Analysis Services tutorials
Create a Basic Table Report (SSRS Tutorial)
Get Help
UserVoice - Suggestion to improve SQL Server?
Setup and Upgrade - MSDN Forum
SQL Server Data Tools - MSDN forum
Transact-SQL - MSDN forum
DBA Stack Exchange (tag sql-server) - ask SQL Server questions
Stack Overflow (tag sql-server) - also has some answers about SQL development
Reddit - general discussion about SQL Server
Microsoft SQL Server License Terms and Information
Support options for business users
Contact Microsoft
See Also
SSDT MSDN Forum
SSDT Team Blog
DACFx API Reference
Download SQL Server Management Studio (SSMS)
mssql-cli command-line query tool for SQL Server
5/17/2018
THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
mssql-cli is an interactive command-line tool for querying SQL Server. Install it on Windows, macOS, or Linux.
Install mssql-cli
For detailed installation instructions, see the Installation Guide. Or, if you're familiar with pip, install it by running the following command:
$ pip install mssql-cli
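Once installed, you can start an interactive session by pointing mssql-cli at a server. A minimal sketch; the server name, database, and user here are placeholders, not values from the source:

```shell
# Connect to a local SQL Server instance as user 'sa' (placeholder
# credentials); mssql-cli prompts for the password, then opens an
# interactive T-SQL prompt with IntelliSense.
mssql-cli -S localhost -d master -U sa
```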
mssql-cli documentation
Documentation for mssql-cli is located in the mssql-cli GitHub repository.
Main page/readme
Installation Guide
Usage Guide
Additional documentation is located in the doc folder.
SQL Server Configuration Manager Help
5/3/2018
THIS TOPIC APPLIES TO: SQL Server (Windows only), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Use SQL Server Configuration Manager to configure SQL Server services and configure network connectivity. To
create or manage database objects, configure security, and write Transact-SQL queries, use SQL Server
Management Studio. For more information about SQL Server Management Studio, see SQL Server Books Online.
TIP
If you need to configure SQL Server on Linux, use the mssql-conf tool. For more information, see Configure SQL Server on
Linux with the mssql-conf tool.
This section contains the F1 Help topics for the dialogs in SQL Server Configuration Manager.
NOTE
SQL Server Configuration Manager cannot configure versions of SQL Server earlier than Microsoft SQL Server 2005.
Services
SQL Server Configuration Manager manages services that are related to SQL Server. Although many of these
tasks can be accomplished using the Microsoft Windows Services dialog, it is important to note that SQL Server
Configuration Manager performs additional operations on the services it manages, such as applying the correct
permissions when the service account is changed. Using the normal Windows Services dialog to configure any of
the SQL Server services might cause the service to malfunction.
Use SQL Server Configuration Manager for the following tasks for services:
Start, stop, and pause services
Configure services to start automatically or manually, disable the services, or change other service settings
Change the passwords for the accounts used by the SQL Server services
Start SQL Server using trace flags (command line parameters)
View the properties of services
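For the trace-flag item in the list above, Configuration Manager adds trace flags as service startup parameters. The same effect can be had manually from an elevated Windows command prompt with net start; the flag number below is illustrative, not from the source:

```shell
net start MSSQLSERVER /T1222
```

This starts the default SQL Server instance with trace flag 1222 (verbose deadlock reporting) enabled for that run of the service.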
See Also
SQL Server Services
SQL Server Network Configuration
SQL Native Client 11.0 Configuration
Choosing a Network Protocol
Configure SQL Server on Linux with the mssql-conf tool
7/11/2018
THIS TOPIC APPLIES TO: SQL Server (Linux only), Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
mssql-conf is a configuration script that installs with SQL Server 2017 for Red Hat Enterprise Linux, SUSE Linux
Enterprise Server, and Ubuntu. You can use this utility to set the following parameters:
Database Mail Profile: Sets the default database mail profile for SQL Server on Linux.
Default data directory: Changes the default directory for new SQL Server database data files (.mdf).
Default log directory: Changes the default directory for new SQL Server database log (.ldf) files.
Default master database file directory: Changes the default directory for the master database files on an existing SQL installation.
Default master database file name: Changes the name of the master database files.
Default dump directory: Changes the default directory for new memory dumps and other troubleshooting files.
Default error log directory: Changes the default directory for new SQL Server ErrorLog, Default Profiler Trace, System Health Session XE, and Hekaton Session XE files.
Default backup directory: Changes the default directory for new backup files.
Dump type: Chooses the type of memory dump file to collect.
TIP
Some of these settings can also be configured with environment variables. For more information, see Configure SQL Server
settings with environment variables.
Usage tips
For Always On Availability Groups and shared disk clusters, always make the same configuration changes
on each node.
For the shared disk cluster scenario, do not attempt to restart the mssql-server service to apply changes.
SQL Server is running as an application. Instead, take the resource offline and then back online.
These examples run mssql-conf by specifying the full path: /opt/mssql/bin/mssql-conf. If you choose to navigate to that path instead, run mssql-conf in the context of the current directory: ./mssql-conf.
4. The mssql-conf utility will attempt to change the collation to the specified value and restart the service. If there are any errors, it rolls back the collation to the previous value.
5. Restore your user database backups.
For a list of supported collations, run the sys.fn_helpcollations function: SELECT Name from sys.fn_helpcollations() .
IMPORTANT
You cannot turn off customer feedback for the free editions of SQL Server, Express and Developer.
1. Run the mssql-conf script as root with the set command for telemetry.customerfeedback. The following
example turns off customer feedback by specifying false.
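The command the step describes is not shown in this excerpt; a sketch of the invocation:

```shell
# Disable customer feedback (not possible on Express/Developer editions),
# then restart SQL Server so the setting takes effect.
sudo /opt/mssql/bin/mssql-conf set telemetry.customerfeedback false
sudo systemctl restart mssql-server
```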
For more information, see Customer Feedback for SQL Server on Linux and the SQL Server Privacy Statement.
2. Change the owner and group of the directory to the mssql user:
3. Use mssql-conf to change the default data directory with the set command:
5. All database files for newly created databases will now be stored in this new location. If you would like to change the location of the log (.ldf) files for new databases, you can use the following set command:
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultlogdir /tmp/log
6. This command also assumes that a /tmp/log directory exists, and that it is owned by the user and group mssql.
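The directory steps above can be sketched as follows, assuming /tmp/data and /tmp/log as the new locations (paths are illustrative):

```shell
# Step 2: give ownership of the new data directory to the mssql user/group.
sudo chown mssql /tmp/data
sudo chgrp mssql /tmp/data

# Step 3: point SQL Server's default data directory at it.
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultdatadir /tmp/data

# Step 5: optionally relocate new log (.ldf) files as well; /tmp/log must
# already exist and be owned by mssql (step 6).
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultlogdir /tmp/log

# Restart SQL Server so the new defaults take effect.
sudo systemctl restart mssql-server
```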
2. Change the owner and group of the directory to the mssql user:
3. Use mssql-conf to change the default master database directory for the master data and log files with the
set command:
NOTE
If SQL Server cannot find the master.mdf and mastlog.ldf files in the specified directory, a templated copy of the system databases is automatically created there, and SQL Server starts up successfully. However, metadata such as user databases, server logins, server certificates, encryption keys, SQL Agent jobs, and the old SA login password are not carried over to the new master database. You will have to stop SQL Server, move your old master.mdf and mastlog.ldf to the new specified location, and start SQL Server to continue using the existing metadata.
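A sketch of the master database relocation the steps describe, assuming /tmp/masterdatabasedir as the new directory (the setting names follow mssql-conf's filelocation options; the path is illustrative):

```shell
# Point SQL Server at the new master data and log file locations; the
# directory must exist and be owned by the mssql user and group.
sudo /opt/mssql/bin/mssql-conf set filelocation.masterdatafile /tmp/masterdatabasedir/master.mdf
sudo /opt/mssql/bin/mssql-conf set filelocation.masterlogfile /tmp/masterdatabasedir/mastlog.ldf
sudo systemctl restart mssql-server
```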
2. Use mssql-conf to change the expected master database names for the master data and log files with the
set command:
3. Change the name of the master database data and log files
2. Change the owner and group of the directory to the mssql user:
3. Use mssql-conf to change the default data directory with the set command:
2. Change the owner and group of the directory to the mssql user:
3. Use mssql-conf to change the default errorlog filename with the set command:
2. Change the owner and group of the directory to the mssql user:
3. Use mssql-conf to change the default backup directory with the "set" command:
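The backup-directory steps above amount to commands along these lines, assuming /tmp/backup as the target (path is illustrative):

```shell
# Steps 2-3: own the directory as mssql, then set it as the backup default.
sudo chown mssql /tmp/backup
sudo chgrp mssql /tmp/backup
sudo /opt/mssql/bin/mssql-conf set filelocation.defaultbackupdir /tmp/backup
sudo systemctl restart mssql-server
```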
Default: false
2. Specify the type of dump file with the coredump.coredumptype setting.
Default: miniplus
The following table lists the possible coredump.coredumptype values.
TYPE: DESCRIPTION
mini: Mini is the smallest dump file type. It uses the Linux system information to determine threads and modules in the process. The dump contains only the host environment thread stacks and modules. It does not contain indirect memory references or globals.
High Availability
The hadr.hadrenabled option enables availability groups on your SQL Server instance. The following command
enables availability groups by setting hadr.hadrenabled to 1. You must restart SQL Server for the setting to take
effect.
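The command the paragraph describes is not reproduced in this excerpt; a sketch using the setting name given above:

```shell
# Enable availability groups, then restart for the setting to take effect.
sudo /opt/mssql/bin/mssql-conf set hadr.hadrenabled 1
sudo systemctl restart mssql-server
```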
For information how this is used with availability groups, see the following two topics.
Configure Always On Availability Group for SQL Server on Linux
Configure read-scale availability group for SQL Server on Linux
2. Change the owner and group of the directory to the mssql user:
3. Run the mssql-conf script as root with the set command for
telemetry.userrequestedlocalauditdirectory:
For more information, see Customer Feedback for SQL Server on Linux.
3. When connecting to SQL Server now, you must specify the custom port with a comma (,) after the
hostname or IP address. For example, to connect with SQLCMD, you would use the following command:
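The sqlcmd command referred to above is not shown in this excerpt. The pattern is host,port after -S; a sketch with placeholder server name, port, and credentials:

```shell
# Connect to a SQL Server listening on custom port 1401; sqlcmd prompts
# for the password when -P is omitted. All values here are placeholders.
sqlcmd -S myserver,1401 -U sa
```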
OPTION: DESCRIPTION
network.tlscert: The absolute path to the certificate file that SQL Server uses for TLS. Example: /etc/ssl/certs/mssql.pem. The certificate file must be accessible by the mssql account. Microsoft recommends restricting access to the file using chown mssql:mssql <file>; chmod 400 <file>.
network.tlskey: The absolute path to the private key file that SQL Server uses for TLS. Example: /etc/ssl/private/mssql.key. The key file must be accessible by the mssql account. Microsoft recommends restricting access to the file using chown mssql:mssql <file>; chmod 400 <file>.
network.tlsciphers: Specifies which ciphers are allowed by SQL Server for TLS. This string must be formatted per OpenSSL's cipher list format. In general, you should not need to change this option. By default, the following ciphers are allowed: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA
For an example of using the TLS settings, see Encrypting Connections to SQL Server on Linux.
Enable/Disable traceflags
This traceflag option enables or disables traceflags for the startup of the SQL Server service. To enable or disable a traceflag, use the following commands:
1. Enable a traceflag using the following command. For example, for Traceflag 1234:
3. Similarly, you can disable one or more enabled traceflags by specifying them and adding the off parameter:
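The commands referred to in the steps above are not shown in this excerpt; mssql-conf's traceflag option takes one or more flag numbers followed by on or off. A sketch using traceflag 1234 from the text:

```shell
# Enable traceflag 1234 at service startup, then restart SQL Server.
sudo /opt/mssql/bin/mssql-conf traceflag 1234 on
sudo systemctl restart mssql-server

# Disable it again with the off parameter.
sudo /opt/mssql/bin/mssql-conf traceflag 1234 off
sudo systemctl restart mssql-server
```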
Remove a setting
To unset any setting made with mssql-conf set , call mssql-conf with the unset option and the name of the
setting. This clears the setting, effectively returning it to its default value.
1. The following example clears the network.tcpport option.
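A sketch of the unset example described above:

```shell
# Clear the custom TCP port, returning it to the default (1433),
# then restart SQL Server so the change takes effect.
sudo /opt/mssql/bin/mssql-conf unset network.tcpport
sudo systemctl restart mssql-server
```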
Note that any settings not shown in this file are using their default values. The next section provides a sample
mssql.conf file.
mssql.conf format
The following /var/opt/mssql/mssql.conf file provides an example for each setting. You can use this format to
manually make changes to the mssql.conf file as needed. If you do manually change the file, you must restart
SQL Server before the changes are applied. To use the mssql.conf file with Docker, you must have Docker persist
your data. First add a complete mssql.conf file to your host directory and then run the container. There is an
example of this in Customer Feedback.
[EULA]
accepteula = Y
[coredump]
captureminiandfull = true
coredumptype = full
[filelocation]
defaultbackupdir = /var/opt/mssql/data/
defaultdatadir = /var/opt/mssql/data/
defaultdumpdir = /var/opt/mssql/data/
defaultlogdir = /var/opt/mssql/data/
[hadr]
hadrenabled = 0
[language]
lcid = 1033
[memory]
memorylimitmb = 4096
[network]
forceencryption = 0
ipaddress = 10.192.0.0
kerberoskeytabfile = /var/opt/mssql/secrets/mssql.keytab
tcpport = 1401
tlscert = /etc/ssl/certs/mssql.pem
tlsciphers = ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-
RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-
AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-
GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA
tlskey = /etc/ssl/private/mssql.key
tlsprotocols = 1.2,1.1,1.0
[sqlagent]
databasemailprofile = default
errorlogfile = /var/opt/mssql/log/sqlagentlog.log
errorlogginglevel = 7
[telemetry]
customerfeedback = true
userrequestedlocalauditdirectory = /tmp/audit
[traceflag]
traceflag0 = 1204
traceflag1 = 2345
traceflag = 3456
Next steps
To instead use environment variables to make some of these configuration changes, see Configure SQL Server
settings with environment variables.
For other management tools and scenarios, see Manage SQL Server on Linux.
Install Distributed Replay - Overview
6/25/2018
THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
Use the following topics to install the Distributed Replay feature.
In This Section
TOPIC: DESCRIPTION
Distributed Replay Requirements: Procedural topic that lists the requirements for installing Distributed Replay.
Install Distributed Replay: Procedural topic that provides a typical Distributed Replay installation by using the Setup Wizard, sample syntax and installation parameters for running unattended Setup, and sample syntax and installation parameters for running Distributed Replay through a configuration file.
Complete the Post-Installation Steps: Procedural topic for completing a Distributed Replay installation.
Modify the Controller and Client Services Accounts: Procedural topic for how to start and stop the Distributed Replay controller and client services, and modify the service accounts.
See Also
Install SQL Server 2016
dta Utility
5/3/2018
THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The dta utility is the command prompt version of Database Engine Tuning Advisor. The dta utility is designed to
allow you to use Database Engine Tuning Advisor functionality in applications and scripts.
Like Database Engine Tuning Advisor, the dta utility analyzes a workload and recommends physical design
structures to improve server performance for that workload. The workload can be a plan cache, a SQL Server
Profiler trace file or table, or a Transact-SQL script. Physical design structures include indexes, indexed views, and
partitioning. After analyzing a workload, the dta utility produces a recommendation for the physical design of
databases and can generate the necessary script to implement the recommendation. Workloads can be specified
from the command prompt with the -if or the -it argument. You can also specify an XML input file from the
command prompt with the -ix argument. In that case, the workload is specified in the XML input file.
Syntax
dta
[ -? ] |
[
[ -S server_name[ \instance ] ]
{ { -U login_id [-P password ] } | -E }
{ -D database_name [ ,...n ] }
[ -d database_name ]
[ -Tl table_list | -Tf table_list_file ]
{ -if workload_file | -it workload_trace_table_name |
-ip | -iq }
{ -s session_name | -ID session_ID }
[ -F ]
[ -of output_script_file_name ]
[ -or output_xml_report_file_name ]
[ -ox output_XML_file_name ]
[ -rl analysis_report_list [ ,...n ] ]
[ -ix input_XML_file_name ]
[ -A time_for_tuning_in_minutes ]
[ -n number_of_events ]
[ -I time_window_in_hours ]
[ -m minimum_improvement ]
[ -fa physical_design_structures_to_add ]
[ -fi filtered_indexes]
[ -fc columnstore_indexes]
[ -fp partitioning_strategy ]
[ -fk keep_existing_option ]
[ -fx drop_only_mode ]
[ -B storage_size ]
[ -c max_key_columns_in_index ]
[ -C max_columns_in_index ]
[ -e | -e tuning_log_name ]
[ -N online_option]
[ -q ]
[ -u ]
[ -x ]
[ -a ]
]
Arguments
-?
Displays usage information.
-A time_for_tuning_in_minutes
Specifies the tuning time limit in minutes. dta uses the specified amount of time to tune the workload and generate a script with the recommended physical design changes. By default, dta assumes a tuning time of 8 hours. Specifying 0 allows unlimited tuning time. dta might finish tuning the entire workload before the time limit expires. However, to make sure that the entire workload is tuned, we recommend that you specify unlimited tuning time (-A 0).
-a
Tunes workload and applies the recommendation without prompting you.
-B storage_size
Specifies the maximum space in megabytes that can be consumed by the recommended index and partitioning.
When multiple databases are tuned, recommendations for all databases are considered for the space calculation.
By default, dta assumes the smaller of the following storage sizes:
Three times the current raw data size, which includes the total size of heaps and clustered indexes on tables
in the database.
The free space on all attached disk drives plus the raw data size.
The default storage size does not include nonclustered indexes and indexed views.
-C max_columns_in_index
Specifies the maximum number of columns in indexes that dta proposes. The maximum value is 1024. By
default, this argument is set to 16.
-c max_key_columns_in_index
Specifies the maximum number of key columns in indexes that dta proposes. The default value is 16, the
maximum value allowed. dta also considers creating indexes with included columns. Indexes recommended
with included columns may exceed the number of columns specified in this argument.
-D database_name
Specifies the name of each database that is to be tuned. The first database is the default database. You can
specify multiple databases by separating the database names with commas, for example:
Alternatively, you can specify multiple databases by using the -D argument for each database name, for example:
The -D argument is mandatory. If the -d argument has not been specified, dta initially connects to the database that is specified with the first USE database_name clause in the workload. If there is no explicit USE database_name clause in the workload, you must use the -d argument.
For example, if you have a workload that contains no explicit USE database_name clause, and you use the following
dta command, a recommendation will not be generated:
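The example commands for -D are omitted in this excerpt. Sketches of the two multi-database forms described above; the server, session, database, and workload file names are placeholders, and the exact option spacing should be checked against the dta syntax reference:

```shell
# Tune two databases, separating names with a comma...
dta -E -S MyServer -D db1,db2 -if workload.sql -s MySession1

# ...or by repeating the -D argument for each database.
dta -E -S MyServer -D db1 -D db2 -if workload.sql -s MySession2
```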
-d database_name
Specifies the first database to which dta connects when tuning a workload. Only one database can be specified for
this argument. For example:
If multiple database names are specified, then dta returns an error. The -d argument is optional.
If you are using an XML input file, you can specify the first database to which dta connects by using the
DatabaseToConnect element that is located under the TuningOptions element. For more information, see
Database Engine Tuning Advisor.
If you are tuning only one database, the -d argument provides functionality that is similar to the -d argument in the
sqlcmd utility, but it does not execute the USE database_name statement. For more information, see sqlcmd
Utility.
-E
Uses a trusted connection instead of requesting a password. Either the -E argument or the -U argument, which
specifies a login ID, must be used.
-e tuning_log_name
Specifies the name of the table or file where dta records events that it could not tune. The table is created on the
server where the tuning is performed.
If a table is used, specify its name in the format: [database_name].[owner_name].table_name. The following table
shows the default values for each parameter:
table_name: None
NOTE
The dta utility does not delete the contents of user-specified tuning log tables if the session is deleted. When tuning very
large workloads, we recommend that a table be specified for the tuning log. Since tuning large workloads can result in large
tuning logs, the sessions can be deleted much faster when a table is used.
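Following the note above, a sketch that records the tuning log in a table instead of a file (the server, database, and table names are hypothetical):

```
dta -E -S MyServer -D AdventureWorks2012 -if workload.sql -s LogSession -e AdventureWorks2012.dbo.TuningLog
```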
-F
Permits dta to overwrite an existing output file. If an output file with the same name already exists and -F is not
specified, dta returns an error. You can use -F with -of, -or, or -ox.
-fa physical_design_structures_to_add
Specifies what types of physical design structures dta should include in the recommendation. The following table
lists and describes the values that can be specified for this argument. When no value is specified, dta uses the
default, -fa IDX.
VALUE DESCRIPTION
-fi
Specifies that filtered indexes be considered for new recommendations. For more information, see Create Filtered
Indexes.
-fc
Specifies that columnstore indexes be considered for new recommendations. DTA will consider both clustered and
non-clustered columnstore indexes. For more information, see
Columnstore index recommendations in Database Engine Tuning Advisor (DTA).
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
-fk keep_existing_option
Specifies what existing physical design structures dta must retain when generating its recommendation. The
following table lists and describes the values that can be specified for this argument:
VALUE DESCRIPTION
-fp partitioning_strategy
Specifies whether new physical design structures (indexes and indexed views) that dta proposes should be
partitioned, and how they should be partitioned. The following table lists and describes the values that can be
specified for this argument:
VALUE DESCRIPTION
NONE: No partitioning.
ALIGNED: In the recommendation generated by dta, every proposed index is partitioned in exactly the same way as the underlying table for which the index is defined. Nonclustered indexes on an indexed view are aligned with the indexed view.
Only one value can be specified for this argument. The default is -fp NONE.
-fx drop_only_mode
Specifies that dta only considers dropping existing physical design structures. No new physical design structures
are considered. When this option is specified, dta evaluates the usefulness of existing physical design structures
and recommends dropping seldom-used structures. This argument takes no values. It cannot be used with the -fa,
-fp, or -fk ALL arguments.
-ID session_ID
Specifies a numerical identifier for the tuning session. If not specified, then dta generates an ID number. You can
use this identifier to view information for existing tuning sessions. If you do not specify a value for -ID, then a
session name must be specified with -s.
-ip
Specifies that the plan cache be used as the workload. The top 1,000 plan cache events for explicitly selected
databases are analyzed. This value can be changed using the -n option.
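A hedged sketch of tuning from the plan cache, reducing the default 1,000 events with -n (the server, database, and session names are hypothetical):

```
dta -E -S MyServer -D AdventureWorks2012 -ip -n 500 -s PlanCacheSession
```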
-iq
Specifies that the Query Store be used as the workload. The top 1,000 events from the Query Store for explicitly
selected databases are analyzed. This value can be changed using the -n option. See Query Store and Tuning
Database Using Workload from Query Store for more information.
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
-if workload_file
Specifies the path and name of the workload file to use as input for tuning. The file must be in one of these
formats: .trc (SQL Server Profiler trace file), .sql (SQL file), or .log (SQL Server trace file). Either one workload file
or one workload table must be specified.
-it workload_trace_table_name
Specifies the name of a table containing the workload trace for tuning. The name is specified in the format:
[database_name].[owner_name].table_name.
The following table shows the default values for each:
owner_name: dbo
table_name: None
NOTE
owner_name must be dbo. If any other value is specified, execution of dta fails and an error is returned. Also note that either
one workload table or one workload file must be specified.
-ix input_XML_file_name
Specifies the name of the XML file containing dta input information. This must be a valid XML document
conforming to DTASchema.xsd. Conflicting arguments specified from the command prompt for tuning options
override the corresponding value in this XML file. The only exception is if a user-specified configuration is entered
in the evaluate mode in the XML input file. For example, if a configuration is entered in the Configuration element
of the XML input file and the EvaluateConfiguration element is also specified as one of the tuning options, the
tuning options specified in the XML input file will override any tuning options entered from the command prompt.
-m minimum_improvement
Specifies the minimum percentage of improvement that the recommended configuration must satisfy.
-N online_option
Specifies whether physical design structures are created online. The following table lists and describes the values
you can specify for this argument:
VALUE DESCRIPTION
-n number_of_events
Specifies the number of events in the workload that dta should tune, for example:
dta -n number_of_events -A 0
In this case, it is important to specify an unlimited tuning time (-A 0). Otherwise, Database Engine Tuning Advisor
assumes an 8-hour tuning time by default.
-I time_window_in_hours
Specifies the time window (in hours) within which a query must have executed for it to be considered by DTA for
tuning when using the -iq option (workload from the Query Store). For example:
dta -iq -I 48
In this case, DTA uses the Query Store as the source of the workload and considers only queries that have executed
within the past 48 hours.
Applies to: SQL Server 2016 (13.x) through SQL Server 2017.
-of output_script_file_name
Specifies that dta writes the recommendation as a Transact-SQL script to the file name and destination specified.
You can use -F with this option. Make sure that the file name is unique, especially if you are also using -or and -ox.
-or output_xml_report_file_name
Specifies that dta writes the recommendation to an output report in XML. If a file name is supplied, then the
recommendations are written to that destination. Otherwise, dta uses the session name to generate the file name
and writes it to the current directory.
You can use -F with this option. Make sure that the file name is unique, especially if you are also using -of and -ox.
-ox output_XML_file_name
Specifies that dta writes the recommendation as an XML file to the file name and destination supplied. Ensure that
Database Engine Tuning Advisor has permissions to write to the destination directory.
You can use -F with this option. Make sure that the file name is unique, especially if you are also using -of and -or.
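The three output options can be combined with -F in a single run; a sketch with hypothetical server, database, session, and file names:

```
dta -E -S MyServer -D AdventureWorks2012 -if workload.sql -s OutputSession -of recommendations.sql -or report.xml -ox session.xml -F
```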
-P password
Specifies the password for the login ID. If this option is not used, dta prompts for a password.
-q
Sets quiet mode. No information is written to the console, including progress and header information.
-rl analysis_report_list
Specifies the list of analysis reports to generate. The following table lists the values that can be specified for this
argument:
VALUE REPORT
Specify multiple reports by separating the values with commas, for example:
-S server_name[ \instance]
Specifies the name of the computer and instance of SQL Server to connect to. If no server_name is specified, dta
connects to the default instance of SQL Server on the local computer. This option is required when connecting to a
named instance or when executing dta from a remote computer on the network.
-s session_name
Specifies the name of the tuning session. This is required if -ID is not specified.
-Tf table_list_file
Specifies the name of a file containing a list of tables to be tuned. Each table listed within the file should begin on a
new line. Table names should be qualified with three-part naming, for example,
AdventureWorks2012.HumanResources.Department. Optionally, to invoke the table-scaling feature, the name
of an existing table can be followed by a number indicating the projected number of rows in the table. Database
Engine Tuning Advisor takes into consideration the projected number of rows while tuning or evaluating
statements in the workload that reference these tables. Note that there can be one or more spaces between the
table_name and the number_of_rows count.
This is the file format for table_list_file:
database_name.[schema_name].table_name [number_of_rows]
database_name.[schema_name].table_name [number_of_rows]
database_name.[schema_name].table_name [number_of_rows]
This argument is an alternative to entering a table list at the command prompt (-Tl). Do not use a table list file (-Tf)
if you are using -Tl. If both arguments are used, dta fails and returns an error.
If the -Tf and -Tl arguments are omitted, all user tables in the specified databases are considered for tuning.
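A sketch of the -Tf argument, assuming a table list file named TableList.txt in the format shown above (the other names are hypothetical):

```
dta -E -S MyServer -D AdventureWorks2012 -if workload.sql -s ScalingSession -Tf TableList.txt
```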
-Tl table_list
Specifies at the command prompt a list of tables to be tuned. Place commas between table names to separate
them. If only one database is specified with the -D argument, then table names do not need to be qualified with a
database name. Otherwise, the fully qualified name in the format: database_name.schema_name.table_name is
required for each table.
This argument is an alternative to using a table list file (-Tf). If both -Tl and -Tf are used, dta fails and returns an
error.
-U login_id
Specifies the login ID used to connect to SQL Server.
-u
Launches the Database Engine Tuning Advisor GUI. All parameters are treated as the initial settings for the user
interface.
-x
Starts the tuning session and exits.
Remarks
Press CTRL+C once to stop the tuning session and generate recommendations based on the analysis that dta has
completed up to that point. You are prompted to decide whether to generate recommendations.
Press CTRL+C again to stop the tuning session without generating recommendations.
Examples
A. Tune a workload that includes indexes and indexed views in its recommendation
This example uses a trusted connection (-E) to connect to the tpcd1G database on MyServer to analyze a
workload and create recommendations. It writes the output to a script file named script.sql. If script.sql already
exists, dta overwrites the file because the -F argument has been specified. The tuning session runs for an
unlimited length of time to ensure a complete analysis of the workload (-A 0). The recommendation must provide
a minimum improvement of 5 percent (-m 5). dta should include indexes and indexed views in its final
recommendation (-fa IDX_IV).
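The command described above might look like the following sketch; the workload file name is an assumption, while the other values come from the description:

```
dta -S MyServer -E -D tpcd1G -if workload.sql -F -of script.sql -A 0 -m 5 -fa IDX_IV
```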
AdventureWorks2012.Sales.Customer 100000
AdventureWorks2012.Sales.Store
AdventureWorks2012.Production.Product 2000000
See Also
Command Prompt Utility Reference (Database Engine)
Database Engine Tuning Advisor
SQL Server Profiler
5/3/2018
THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
SQL Server Profiler is an interface to create and manage traces and analyze and replay trace results. Events are
saved in a trace file that can later be analyzed or used to replay a specific series of steps when trying to diagnose a
problem.
IMPORTANT
We are announcing the deprecation of SQL Server Profiler for Database Engine Trace Capture and Trace
Replay. These features are available in SQL Server 2016 but will be removed in a later version.
The Microsoft.SqlServer.Management.Trace namespace that contains the Microsoft SQL Server Trace and
Replay objects will also be deprecated.
Note that SQL Server Profiler for the Analysis Services workloads is NOT being deprecated, and will continue
to be supported.
Submit your feedback and questions on our Connect page.
Trace Capture in SQL Server Profiler: replaced by the Extended Events graphical user interface in SQL Server Management Studio.
NOTE
An understanding of SQL Trace helps when working with SQL Server Profiler. For more information, see SQL Trace.
Event
An event is an action generated within an instance of the SQL Server Database Engine. Examples of events include:
Login connections, failures, and disconnections.
Transact-SQL SELECT, INSERT, UPDATE, and DELETE statements.
Remote procedure call (RPC) batch status.
The start or end of a stored procedure.
The start or end of statements within stored procedures.
The start or end of an SQL batch.
An error written to the SQL Server error log.
A lock acquired or released on a database object.
An opened cursor.
Security permission checks.
All of the data generated by an event is displayed in the trace in a single row. This row is intersected by data
columns that describe the event in detail.
EventClass
An event class is a type of event that can be traced. The event class contains all of the data that can be
reported by an event. Examples of event classes are the following:
SQL:BatchCompleted
Audit Login
Audit Logout
Lock:Acquired
Lock:Released
EventCategory
An event category defines the way events are grouped within SQL Server Profiler. For example, all lock
event classes are grouped within the Locks event category. However, event categories only exist within
SQL Server Profiler. This term does not reflect the way Database Engine events are grouped.
DataColumn
A data column is an attribute of an event class captured in the trace. Because the event class determines
the type of data that can be collected, not all data columns are applicable to all event classes. For example, in
a trace that captures the Lock:Acquired event class, the BinaryData data column contains the value of the
locked page ID or row, but the Integer Data data column does not contain any value because it is not
applicable to the event class being captured.
Template
A template defines the default configuration for a trace. Specifically, it includes the event classes you want to
monitor with SQL Server Profiler. For example, you can create a template that specifies the events, data
columns, and filters to use. A template is not executed, but rather is saved as a file with a .tdf extension. Once
saved, the template controls the trace data that is captured when a trace based on the template is launched.
Trace
A trace captures data based on selected event classes, data columns, and filters. For example, you can create
a trace to monitor exception errors. To do this, you select the Exception event class and the Error, State,
and Severity data columns. Data from these three columns needs to be collected in order for the trace
results to provide meaningful data. You can then run a trace, configured in such a manner, and collect data
on any Exception events that occur in the server. Trace data can be saved, or used immediately for analysis.
Traces can be replayed at a later date, although certain events, such as Exception events, are never
replayed. You can also save the trace as a template to build similar traces in the future.
SQL Server provides two ways to trace an instance of SQL Server: you can trace with SQL Server Profiler,
or you can trace using system stored procedures.
Filter
When you create a trace or template, you can define criteria to filter the data collected by the event. To keep
traces from becoming too large, you can filter them so that only a subset of the event data is collected. For
example, you can limit the Microsoft Windows user names in the trace to specific users, thereby reducing
the output data.
If a filter is not set, all events of the selected event classes are returned in the trace output.
SQL Server Profiler Templates and Permissions: Lists the predefined templates that SQL Server provides for monitoring certain types of events, and the permissions required to replay traces.
Permissions Required to Run SQL Server Profiler: Describes how to run SQL Server Profiler.
Specify Events and Data Columns for a Trace File (SQL Server Profiler): Describes how to specify events and data columns for a trace file.
Save Trace Results to a File (SQL Server Profiler): Describes how to save trace results to a file.
Save Trace Results to a Table (SQL Server Profiler): Describes how to save trace results to a table.
Filter Events in a Trace (SQL Server Profiler): Describes how to filter events in a trace.
View Filter Information (SQL Server Profiler): Describes how to view filter information.
Set a Maximum File Size for a Trace File (SQL Server Profiler): Describes how to set a maximum file size for a trace file.
Set a Maximum Table Size for a Trace Table (SQL Server Profiler): Describes how to set a maximum table size for a trace table.
Start a Trace Automatically after Connecting to a Server (SQL Server Profiler): Describes how to start a trace automatically after connecting to a server.
Filter Events Based on the Event Start Time (SQL Server Profiler): Describes how to filter events based on the event start time.
Filter Events Based on the Event End Time (SQL Server Profiler): Describes how to filter events based on the event end time.
Filter Server Process IDs (SPIDs) in a Trace (SQL Server Profiler): Describes how to filter server process IDs (SPIDs) in a trace.
Run a Trace After It Has Been Paused or Stopped (SQL Server Profiler): Describes how to run a trace after it has been paused or stopped.
Clear a Trace Window (SQL Server Profiler): Describes how to clear a trace window.
Close a Trace Window (SQL Server Profiler): Describes how to close a trace window.
Set Trace Definition Defaults (SQL Server Profiler): Describes how to set trace definition defaults.
Set Trace Display Defaults (SQL Server Profiler): Describes how to set trace display defaults.
Open a Trace File (SQL Server Profiler): Describes how to open a trace file.
Open a Trace Table (SQL Server Profiler): Describes how to open a trace table.
Replay a Trace Table (SQL Server Profiler): Describes how to replay a trace table.
Replay a Trace File (SQL Server Profiler): Describes how to replay a trace file.
Replay a Single Event at a Time (SQL Server Profiler): Describes how to replay a single event at a time.
Replay a Transact-SQL Script (SQL Server Profiler): Describes how to replay a Transact-SQL script.
Create a Trace Template (SQL Server Profiler): Describes how to create a trace template.
Modify a Trace Template (SQL Server Profiler): Describes how to modify a trace template.
Set Global Trace Options (SQL Server Profiler): Describes how to set global trace options.
Find a Value or Data Column While Tracing (SQL Server Profiler): Describes how to find a value or data column while tracing.
Derive a Template from a Running Trace (SQL Server Profiler): Describes how to derive a template from a running trace.
Derive a Template from a Trace File or Trace Table (SQL Server Profiler): Describes how to derive a template from a trace file or trace table.
Create a Transact-SQL Script for Running a Trace (SQL Server Profiler): Describes how to create a Transact-SQL script for running a trace.
Export a Trace Template (SQL Server Profiler): Describes how to export a trace template.
Import a Trace Template (SQL Server Profiler): Describes how to import a trace template.
Extract a Script from a Trace (SQL Server Profiler): Describes how to extract a script from a trace.
Correlate a Trace with Windows Performance Log Data (SQL Server Profiler): Describes how to correlate a trace with Windows performance log data.
Organize Columns Displayed in a Trace (SQL Server Profiler): Describes how to organize columns displayed in a trace.
Start SQL Server Profiler: Describes how to start SQL Server Profiler.
Save Traces and Trace Templates: Describes how to save traces and trace templates.
Correlate a Trace with Windows Performance Log Data: Describes how to correlate a trace with Windows performance log data.
View and Analyze Traces with SQL Server Profiler: Describes how to view and analyze traces with SQL Server Profiler.
Analyze Deadlocks with SQL Server Profiler: Describes how to analyze deadlocks with SQL Server Profiler.
Analyze Queries with SHOWPLAN Results in SQL Server Profiler: Describes how to analyze queries with SHOWPLAN results in SQL Server Profiler.
Filter Traces with SQL Server Profiler: Describes how to filter traces with SQL Server Profiler.
Replay Traces: Describes how to use the replay features of SQL Server Profiler.
SQL Server Profiler F1 Help: Lists the context-sensitive help topics for SQL Server Profiler.
SQL Server Profiler Stored Procedures (Transact-SQL): Lists the system stored procedures that are used by SQL Server Profiler to monitor performance and activity.
See also
Locks Event Category
Sessions Event Category
Stored Procedures Event Category
TSQL Event Category
Server Performance and Activity Monitoring
ssbdiagnose Utility (Service Broker)
5/3/2018
THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The ssbdiagnose utility reports issues in Service Broker conversations or the configuration of Service Broker
services. Configuration checks can be made for either two services or a single service. Issues are reported either in
the command prompt window as human-readable text, or as formatted XML that can be redirected to a file or
another program.
Syntax
ssbdiagnose
[ [ -XML ]
[ -LEVEL { ERROR | WARNING | INFO } ]
[-IGNORE error_id ] [ ...n]
[ <baseconnectionoptions> ]
{ <configurationreport> | <runtimereport> }
]
| -?
<configurationreport> ::=
CONFIGURATION
{ [ FROM SERVICE service_name
[ <fromconnectionoptions> ]
[ MIRROR <mirrorconnectionoptions> ]
]
[ TO SERVICE service_name[, broker_id ]
[ <toconnectionoptions> ]
[ MIRROR <mirrorconnectionoptions> ]
]
}
ON CONTRACT contract_name
[ ENCRYPTION { ON | OFF | ANONYMOUS } ]
<runtime_report> ::=
RUNTIME
[-SHOWEVENTS ]
[ -NEW
[ -ID { conversation_handle
| conversation_group_id
| conversation_id
}
] [ ...n]
]
[ -TIMEOUT timeout_interval ]
[ <runtimeconnectionoptions> ]
<baseconnectionoptions> ::=
<connectionoptions>
<fromconnectionoptions> ::=
<connectionoptions>
<toconnectionoptions> ::=
<connectionoptions>
<mirrorconnectionoptions> ::=
<connectionoptions>
<runtimeconnectionoptions> ::=
[ CONNECT TO <connectionoptions> ] [ ...n]
<connectionoptions> ::=
[ -E | { -U login_id [ -P password ] } ]
[ -S server_name[\instance_name] ]
[ -d database_name ]
[ -l login_timeout ]
SELECT service_broker_guid
FROM sys.databases
WHERE database_id = DB_ID();
<toconnectionoptions>
Specifies the information that is required to connect to the database that holds the target service. If
toconnectionoptions is not specified, ssbdiagnose uses the connection information from
baseconnectionoptions to connect to the target database.
MIRROR
Specifies that the associated Service Broker service is hosted in a mirrored database. ssbdiagnose verifies that the
route to the service is a mirrored route, where MIRROR_ADDRESS was specified on CREATE ROUTE.
<mirrorconnectionoptions>
Specifies the information that is required to connect to the mirror database. If mirrorconnectionoptions is not
specified, ssbdiagnose uses the connection information from baseconnectionoptions to connect to the mirror
database.
ON CONTRACT contract_name
Requests that ssbdiagnose only check configurations that use the specified contract. If ON CONTRACT is not
specified, ssbdiagnose reports on the contract named DEFAULT.
ENCRYPTION { ON | OFF | ANONYMOUS }
Requests verification that the dialog is correctly configured for the specified level of encryption:
ON: Default setting. Full dialog security is configured. Certificates have been deployed on both sides of the dialog,
a remote service binding is present, and the GRANT SEND statement for the target service specified the initiator
user.
OFF: No dialog security is configured. No certificates have been deployed, no remote service binding was created,
and the GRANT SEND for the initiator service specified the public role.
ANONYMOUS: Anonymous dialog security is configured. One certificate has been deployed, the remote service
binding specified the anonymous clause, and the GRANT SEND for the target service specified the public role.
RUNTIME
Requests a report of issues that cause runtime errors for a Service Broker conversation. If neither -NEW nor -ID is
specified, ssbdiagnose monitors all conversations in all databases specified in the connection options. If -NEW or
-ID is specified, ssbdiagnose builds a list of the IDs specified in the parameters.
While ssbdiagnose is running, it records all SQL Server Profiler events that indicate runtime errors. It records the
events that occur for the specified IDs, plus system-level events. If runtime errors are encountered, ssbdiagnose
runs a configuration report on the associated configuration.
By default, runtime errors are not included in the output report, only the results of the configuration analysis. Use -
SHOWEVENTS to have the runtime errors included in the report.
-SHOWEVENTS
Specifies that ssbdiagnose report SQL Server Profiler events during a RUNTIME report. Only events that are
considered error conditions are reported. By default, ssbdiagnose only monitors error events; it does not report
them in the output.
-NEW
Requests runtime monitoring of the first conversation that begins after ssbdiagnose starts running.
-ID
Requests runtime monitoring of the specified conversation elements. You can specify -ID multiple times.
If you specify a conversation handle, only events associated with the associated conversation endpoint are
reported. If you specify a conversation ID, all events for that conversation and its initiator and target endpoints are
reported. If a conversation group ID is specified, all events for all conversations and endpoints in the conversation
group are reported.
conversation_handle
A unique identifier that identifies a conversation endpoint in an application. Conversation handles are unique to
one endpoint of a conversation; the initiator and target endpoints have separate conversation handles.
Conversation handles are returned to applications by the @dialog_handle parameter of the BEGIN DIALOG
statement, and the conversation_handle column in the result set of a RECEIVE statement.
Conversation handles are reported in the conversation_handle column of the sys.transmission_queue and
sys.conversation_endpoints catalog views.
conversation_group_id
The unique identifier that identifies a conversation group.
Conversation group IDs are returned to applications by the @conversation_group_id parameter of the GET
CONVERSATION GROUP statement and the conversation_group_id column in the result set of a RECEIVE
statement.
Conversation group IDs are reported in the conversation_group_id columns of the sys.conversation_groups
and sys.conversation_endpoints catalog views.
conversation_id
The unique identifier that identifies a conversation. Conversation IDs are the same for both the initiator and target
endpoints of a conversation.
Conversation IDs are reported in the conversation_id column of the sys.conversation_endpoints catalog view.
-TIMEOUT timeout_interval
Specifies the number of seconds for a RUNTIME report to run. If -TIMEOUT is not specified, the runtime report
runs indefinitely. -TIMEOUT is used only on RUNTIME reports, not CONFIGURATION reports. Press CTRL+C to
quit ssbdiagnose if -TIMEOUT was not specified, or to end a runtime report before the time-out interval expires.
timeout_interval must be a number between 1 and 2,147,483,647.
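A hedged sketch of a runtime report that includes error events and stops after 60 seconds (the server and database names are hypothetical):

```
ssbdiagnose -E -S MyServer -d MyDatabase RUNTIME -SHOWEVENTS -TIMEOUT 60
```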
<runtimeconnectionoptions>
Specifies the connection information for the databases that contain the services associated with conversation
elements being monitored. If all the services are in the same database, you only have to specify one CONNECT
TO clause. If the services are in separate databases you must supply a CONNECT TO clause for each database. If
runtimeconnectionoptions is not specified, ssbdiagnose uses the connection information from
baseconnectionoptions.
-E
Open a Windows Authentication connection to an instance of the Database Engine by using your current Windows
account as the login ID. The login must be a member of the sysadmin fixed-server role.
The -E option ignores the user and password settings of the SQLCMDUSER and SQLCMDPASSWORD
environment variables.
If neither -E nor -U is specified, ssbdiagnose uses the value from the SQLCMDUSER environment variable. If
SQLCMDUSER is not set either, ssbdiagnose uses Windows Authentication.
If the -E option is used together with the -U option or the -P option, an error message is generated.
-U login_id
Open a SQL Server Authentication connection by using the specified login ID. The login must be a member of the
sysadmin fixed-server role.
If neither -E nor -U is specified, ssbdiagnose uses the value from the SQLCMDUSER environment variable. If
SQLCMDUSER is not set either, ssbdiagnose tries to connect by using Windows Authentication mode based on
the Windows account of the user who is running ssbdiagnose.
If the -U option is used together with the -E option, an error message is generated. If the -U option is followed by
more than one argument, an error message is generated and the program exits.
-P password
Specifies the password for the -U login ID. Passwords are case sensitive. If the -U option is used and the -P option
is not used, ssbdiagnose uses the value from the SQLCMDPASSWORD environment variable. If
SQLCMDPASSWORD is not set either, ssbdiagnose prompts the user for a password.
IMPORTANT
When you type a SET SQLCMDPASSWORD command, your password will be visible to anyone who can see your monitor.
If the -P option is specified without a password, ssbdiagnose uses the default password (NULL).
IMPORTANT
Do not use a blank password. Use a strong password. For more information, see Strong Passwords.
The password prompt is displayed by printing Password: to the console. User input is hidden: nothing is
displayed and the cursor stays in position.
If the -P option is used with the -E option, an error message is generated.
If the -P option is followed by more than one argument, an error message is generated.
-S server_name[\instance_name]
Specifies the instance of the Database Engine that holds the Service Broker services to be analyzed.
Specify server_name to connect to the default instance of the Database Engine on that server. Specify
server_name\instance_name to connect to a named instance of the Database Engine on that server. If -S is not
specified, ssbdiagnose uses the value of the SQLCMDSERVER environment variable. If SQLCMDSERVER is not
set either, ssbdiagnose connects to the default instance of the Database Engine on the local computer.
-d database_name
Specifies the database that holds the Service Broker services to be analyzed. If the database does not exist, an error
message is generated. If -d is not specified, the default is the database specified in the default-database property
for your login.
-l login_timeout
Specifies the number of seconds before an attempt to connect to a server times out. If -l is not specified,
ssbdiagnose uses the value set for the SQLCMDLOGINTIMEOUT environment variable. If
SQLCMDLOGINTIMEOUT is not set either, the default time-out is thirty seconds. The login time-out must be a
number between 0 and 65534. If the value that is supplied is not numeric or does not fall into that range,
ssbdiagnose generates an error message. A value of 0 specifies time-out to be infinite.
-?
Displays command line help.
Remarks
Use ssbdiagnose to do the following:
Confirm that there are no configuration errors in a newly configured Service Broker application.
Confirm that there are no configuration errors after changing the configuration of an existing Service Broker
application.
Confirm that there are no configuration errors after a Service Broker database is detached and then
reattached to a new instance of the Database Engine.
Research whether there are configuration errors when messages are not successfully transmitted between
services.
Get a report of any errors that occur in a set of Service Broker conversation elements.
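For example, the first check above might look like the following sketch, which validates the configuration between two hypothetical services hosted on different servers:

```
ssbdiagnose -E CONFIGURATION FROM SERVICE /test/initiator -S InitiatorServer -d InitiatorDatabase TO SERVICE /test/target -S TargetServer -d TargetDatabase ON CONTRACT TestContract
```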
Configuration Reporting
To correctly analyze the configuration used by a conversation, run an ssbdiagnose configuration report that uses
the same options that are used by the conversation. If you specify a lower level of options for ssbdiagnose than
are used by the conversation, ssbdiagnose might not report conditions that are required by the conversation. If
you specify a higher level of options for ssbdiagnose, it might report items that are not required by the
conversation. For example, a conversation between two services in the same database can be run with
ENCRYPTION OFF. If you run ssbdiagnose to validate the configuration between the two services, but use the
default ENCRYPTION ON setting, ssbdiagnose reports that the database is missing a master key. A master key is
not required for the conversation.
The ssbdiagnose configuration report analyzes only one Service Broker service or a single pair of services every
time it is run. To report on multiple pairs of Service Broker services, build a .cmd command file that calls
ssbdiagnose multiple times.
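That .cmd file could look like the following sketch, with one configuration report per service pair (the server, database, and service names here are illustrative placeholders, not values from this article):

```cmd
REM check-broker-config.cmd: one configuration report per pair of services.
REM All names below are hypothetical; substitute your own.
ssbdiagnose -E -S MyServer -d SalesDB CONFIGURATION FROM SERVICE /Sales/OrderEntry TO SERVICE /Sales/Billing
ssbdiagnose -E -S MyServer -d SalesDB CONFIGURATION FROM SERVICE /Sales/OrderEntry TO SERVICE /Sales/Shipping
```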
Runtime Reporting
When -RUNTIME is specified, ssbdiagnose searches all databases specified in runtimeconnectionoptions and
baseconnectionoptions to build a list of Service Broker IDs. The full list of IDs built depends on what is specified
for -NEW and -ID:
If neither -NEW nor -ID is specified, the list includes all conversations for all databases specified in the
connection options.
If -NEW is specified, ssbdiagnose includes the elements for the first conversation that starts after
ssbdiagnose is run. This includes the conversation ID and the conversation handles for both the target and
initiator conversation endpoints.
If -ID is specified with a conversation handle, only that handle is included in the list.
If -ID is specified with a conversation ID, the conversation ID and the handles for both of its conversation
endpoints are added to the list.
If -ID is specified with a conversation group ID, all the conversation IDs and conversation handles in that
group are added to the list.
The list does not include elements from databases that are not covered by the connection options. For
example, assume that you use -ID to specify a conversation ID, but only provide a
runtimeconnectionoptions clause for the initiator database and not the target database. ssbdiagnose will
not include the target conversation handle in its list of IDs, only the conversation ID and the initiator
conversation handle.
ssbdiagnose monitors the SQL Server Profiler events from the databases covered by
runtimeconnectionoptions and baseconnectionoptions. It searches for Service Broker events that
indicate an error was encountered by one or more of the Service Broker IDs in the runtime list.
ssbdiagnose also searches for system-level Service Broker error events not specifically associated with any
conversation group.
If ssbdiagnose finds conversation errors, the utility will attempt to report on the root cause of the events by
also running a configuration report. ssbdiagnose uses the metadata in the databases to try to determine
the instances, Service Broker IDs, databases, services, and contracts used by the conversation. It then runs a
configuration report using all available information.
By default, ssbdiagnose does not report error events. It only reports the underlying issues found during the
configuration check. This minimizes the amount of information reported and helps you focus on the
underlying configuration issues. You can specify -SHOWEVENTS to see the error events encountered by
ssbdiagnose.
Permissions
In each connectionoptions clause, the login specified with either -E or -U must be a member of the sysadmin
fixed-server role in the instance specified in -S.
Examples
This section contains examples of using ssbdiagnose at a command prompt.
A. Checking the Configuration of Two Services in the Same Database
The following example shows how to request a configuration report when the following are true:
The initiator and target service are in the same database.
The database is in the default instance of the Database Engine.
The instance is on the same computer on which ssbdiagnose is run.
The ssbdiagnose utility reports the configuration that uses the DEFAULT contract because ON
CONTRACT is not specified.
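The command itself did not survive in this copy of the article; a hedged reconstruction that matches the description is (database and service names are placeholders):

```cmd
ssbdiagnose -E -d TestDatabase CONFIGURATION FROM SERVICE /test/initiator TO SERVICE /test/target
```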
B. Checking the Configuration of Two Services on Separate Computers That Use One Login
The following example shows how to request a configuration report when the initiator and target service are on
separate computers, but can be accessed by using the same Windows Authentication login.
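The original command is missing here; a sketch consistent with the description follows, assuming connection options may be supplied after each SERVICE clause (computer, database, service, and contract names are placeholders):

```cmd
ssbdiagnose -E CONFIGURATION FROM SERVICE /test/initiator -S InitiatorComputer -d InitiatorDatabase TO SERVICE /test/target -S TargetComputer -d TargetDatabase ON CONTRACT TestContract
```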
C. Checking the Configuration of Two Services on Separate Computers That Use Separate Logins
The following example shows how to request a configuration report when the initiator and target service are on
separate computers, and separate SQL Server Authentication logins are required for each instance of the Database
Engine.
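The original command is missing here; a sketch consistent with the description follows, with separate -U and -P options after each SERVICE clause (all names and passwords are placeholders):

```cmd
ssbdiagnose CONFIGURATION FROM SERVICE /test/initiator -S InitiatorComputer -d InitiatorDatabase -U InitiatorLogin -P Password1 TO SERVICE /test/target -S TargetComputer -d TargetDatabase -U TargetLogin -P Password2 ON CONTRACT TestContract
```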
F. Monitor the status of a specific conversation on the local computer with a time out
The following example shows how to monitor a specific conversation where the initiator and target services are in
the same database in the default instance of the same computer that is running ssbdiagnose. The time-out
interval is set to 20 seconds.
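The original command is missing here; a sketch consistent with the description follows (the conversation ID is a made-up GUID and the database name is a placeholder; -TIMEOUT is in seconds):

```cmd
ssbdiagnose -E RUNTIME -ID 27AA2B0C-84A5-4E18-9CF6-50EBDE8E28AE -TIMEOUT 20 CONNECT TO -d TestDatabase
```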
See Also
SQL Server Service Broker
BEGIN DIALOG CONVERSATION (Transact-SQL)
CREATE BROKER PRIORITY (Transact-SQL)
CREATE CERTIFICATE (Transact-SQL)
CREATE CONTRACT (Transact-SQL)
CREATE ENDPOINT (Transact-SQL)
CREATE MASTER KEY (Transact-SQL)
CREATE MESSAGE TYPE (Transact-SQL)
CREATE QUEUE (Transact-SQL)
CREATE REMOTE SERVICE BINDING (Transact-SQL)
CREATE ROUTE (Transact-SQL)
CREATE SERVICE (Transact-SQL)
RECEIVE (Transact-SQL)
sys.transmission_queue (Transact-SQL)
sys.conversation_endpoints (Transact-SQL)
sys.conversation_groups (Transact-SQL)
SQL Command Prompt Utilities (Database Engine)
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Command prompt utilities enable you to script SQL Server operations. The following table contains a list of
command prompt utilities that ship with SQL Server.
bcp Utility
Used to copy data between an instance of Microsoft SQL Server and a data file in a user-specified format.
Installed in: <drive>:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn

Deploy Model Solutions with the Deployment Utility
Used to deploy Analysis Services projects to instances of Analysis Services.
Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn\VSShell\Common7\IDE

mssql-scripter (Public Preview)
Used to generate CREATE and INSERT T-SQL scripts for database objects in SQL Server, Azure SQL Database, and Azure SQL Data Warehouse.
Installed in: See our GitHub repo for download and usage information.

Profiler Utility
Used to start SQL Server Profiler from a command prompt.
Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

RS.exe Utility (SSRS)
Used to run scripts designed for managing Reporting Services report servers.
Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

rsconfig Utility (SSRS)
Used to configure a report server connection.
Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

rskeymgmt Utility (SSRS)
Used to manage encryption keys on a report server.
Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

sqlagent90 Application
Used to start SQL Server Agent from a command prompt.
Installed in: <drive>:\Program Files\Microsoft SQL Server\<instance_name>\MSSQL\Binn

sqlcmd Utility
Allows you to enter Transact-SQL statements, system procedures, and script files at the command prompt.
Installed in: <drive>:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn

sqlps Utility
Used to run PowerShell commands and scripts. Loads and registers the SQL Server PowerShell provider and cmdlets.
Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn

sqlservr Application
Used to start and stop an instance of the Database Engine from the command prompt for troubleshooting.
Installed in: <drive>:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn

Ssms Utility
Used to start SQL Server Management Studio from a command prompt.
Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\Tools\Binn\VSShell\Common7\IDE

tablediff Utility
Used to compare the data in two tables for non-convergence, which is useful when troubleshooting a replication topology.
Installed in: <drive>:\Program Files\Microsoft SQL Server\nnn\COM
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
For content related to previous versions of SQL Server, see bcp Utility.
For the latest version of the bcp utility, see Microsoft Command Line Utilities 14.0 for SQL Server.
For using bcp on Linux, see Install sqlcmd and bcp on Linux.
For detailed information about using bcp with Azure SQL Data Warehouse, see Load data with bcp.
The bulk copy program utility (bcp) bulk copies data between an instance of Microsoft SQL Server and a data file
in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server
tables or to export data out of tables into data files. Except when used with the queryout option, the utility
requires no knowledge of Transact-SQL. To import data into a table, you must either use a format file created for
that table or understand the structure of the table and the types of data that are valid for its columns.
For the syntax conventions that are used for the bcp syntax, see Transact-SQL Syntax Conventions (Transact-
SQL ).
NOTE
If you use bcp to back up your data, create a format file to record the data format. bcp data files do not include any
schema or format information, so if a table or view is dropped and you do not have a format file, you may be unable to
import the data.
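As a hedged sketch of that advice, the format file can be recorded before exporting (the database, table, and file names are hypothetical; -T assumes a trusted connection to the local default instance):

```cmd
REM Record the format first, then export; keep MyTable.fmt alongside MyTable.dat.
bcp MyDatabase.dbo.MyTable format nul -c -f MyTable.fmt -T
bcp MyDatabase.dbo.MyTable out MyTable.dat -c -T
```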
SYNTAX
bcp [database_name.] schema.{table_name | view_name | "query"}
{in data_file | out data_file | queryout data_file | format nul}
[-a packet_size]
[-b batch_size]
[-c]
[-C { ACP | OEM | RAW | code_page } ]
[-d database_name]
[-e err_file]
[-E]
[-f format_file]
[-F first_row]
[-G Azure Active Directory Authentication]
[-h"hint [,...n]"]
[-i input_file]
[-k]
[-K application_intent]
[-L last_row]
[-m max_errors]
[-n]
[-N]
[-o output_file]
[-P password]
[-q]
[-r row_term]
[-R]
[-S server_name[\instance_name]]
[-t field_term]
[-T]
[-U login_id]
[-v]
[-V (80 | 90 | 100 | 110 | 120 | 130)]
[-w]
[-x]
Arguments
data_file
Is the full path of the data file. When data is bulk imported into SQL Server, the data file contains the data to be
copied into the specified table or view. When data is bulk exported from SQL Server, the data file contains the data
copied from the table or view. The path can have from 1 through 255 characters. The data file can contain a
maximum of 2^63 - 1 rows.
database_name
Is the name of the database in which the specified table or view resides. If not specified, this is the default database
for the user.
You can also explicitly specify the database name with -d.
in data_file | out data_file | queryout data_file | format nul
Specifies the direction of the bulk copy, as follows:
in copies from a file into the database table or view.
out copies from the database table or view to a file. If you specify an existing file, the file is overwritten.
When extracting data, note that the bcp utility represents an empty string as a null and a null string as an
empty string.
queryout copies from a query and must be specified only when bulk copying data from a query.
format creates a format file based on the option specified (-n, -c, -w, or -N) and the table or view
delimiters. When bulk copying data, the bcp command can refer to a format file, which saves you from re-
entering format information interactively. The format option requires the -f option; creating an XML format
file, also requires the -x option. For more information, see Create a Format File (SQL Server). You must
specify nul as the value (format nul).
owner
Is the name of the owner of the table or view. owner is optional if the user performing the operation owns
the specified table or view. If owner is not specified and the user performing the operation does not own the
specified table or view, SQL Server returns an error message, and the operation is canceled.
" query " Is a Transact-SQL query that returns a result set. If the query returns multiple result sets, only the first
result set is copied to the data file; subsequent result sets are ignored. Use double quotation marks around the
query and single quotation marks around anything embedded in the query. queryout must also be specified
when bulk copying data from a query.
The query can reference a stored procedure as long as all tables referenced inside the stored procedure exist prior
to executing the bcp statement. For example, if the stored procedure generates a temp table, the bcp statement
fails because the temp table is available only at run time and not at statement execution time. In this case, consider
inserting the results of the stored procedure into a table and then use bcp to copy the data from the table into a
data file.
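To illustrate, a queryout invocation can look like the following sketch (the database, table, and column names are hypothetical; -T assumes a trusted connection):

```cmd
REM Export the result set of a query to a character-format data file.
bcp "SELECT FirstName, LastName FROM MyDatabase.dbo.Contacts ORDER BY LastName" queryout Contacts.dat -c -T
```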
table_name
Is the name of the destination table when importing data into SQL Server (in), and the source table when
exporting data from SQL Server (out).
view_name
Is the name of the destination view when copying data into SQL Server (in), and the source view when copying
data from SQL Server (out). Only views in which all columns refer to the same table can be used as destination
views. For more information on the restrictions for copying data into views, see INSERT (Transact-SQL ).
-a packet_size
Specifies the number of bytes, per network packet, sent to and from the server. A server configuration option can
be set by using SQL Server Management Studio (or the sp_configure system stored procedure). However, the
server configuration option can be overridden on an individual basis by using this option. packet_size can be from
4096 to 65535 bytes; the default is 4096.
Increased packet size can enhance performance of bulk-copy operations. If a larger packet is requested but cannot
be granted, the default is used. The performance statistics generated by the bcp utility show the packet size used.
-b batch_size
Specifies the number of rows per batch of imported data. Each batch is imported and logged as a separate
transaction that imports the whole batch before being committed. By default, all the rows in the data file are
imported as one batch. To distribute the rows among multiple batches, specify a batch_size that is smaller than the
number of rows in the data file. If the transaction for any batch fails, only insertions from the current batch are
rolled back. Batches already imported by committed transactions are unaffected by a later failure.
Do not use this option in conjunction with the -h "ROWS_PER_BATCH=bb" option.
-c
Performs the operation using a character data type. This option does not prompt for each field; it uses char as the
storage type, without prefixes and with \t (tab character) as the field separator and \r\n (newline character) as the
row terminator. -c is not compatible with -w.
For more information, see Use Character Format to Import or Export Data (SQL Server).
-C { ACP | OEM | RAW | code_page }
Specifies the code page of the data in the data file. code_page is relevant only if the data contains char, varchar, or
text columns with character values greater than 127 or less than 32.
NOTE
We recommend specifying a collation name for each column in a format file, except when you want the 65001 option to
have priority over the collation/code page specification.
OEM
Default code page used by the client. This is the default code page used if -C is not specified.
-d database_name
Specifies the database to connect to. By default, bcp.exe connects to the user’s default database. If -d
database_name and a three part name (database_name.schema.table, passed as the first parameter to bcp.exe) is
specified, an error will occur because you cannot specify the database name twice. If database_name begins with a
hyphen (-) or a forward slash (/), do not add a space between -d and the database name.
-e err_file
Specifies the full path of an error file used to store any rows that the bcp utility cannot transfer from the file to the
database. Error messages from the bcp command go to the workstation of the user. If this option is not used, an
error file is not created.
If err_file begins with a hyphen (-) or a forward slash (/), do not include a space between -e and the err_file value.
-E
Specifies that identity value or values in the imported data file are to be used for the identity column. If -E is not
given, the identity values for this column in the data file being imported are ignored, and SQL Server
automatically assigns unique values based on the seed and increment values specified during table creation.
If the data file does not contain values for the identity column in the table or view, use a format file to specify that
the identity column in the table or view should be skipped when importing data; SQL Server automatically assigns
unique values for the column. For more information, see DBCC CHECKIDENT (Transact-SQL ).
The -E option has a special permissions requirement. For more information, see "Remarks" later in this topic.
-f format_file
Specifies the full path of a format file. The meaning of this option depends on the environment in which it is used,
as follows:
If -f is used with the format option, the specified format_file is created for the specified table or view. To
create an XML format file, also specify the -x option. For more information, see Create a Format File (SQL
Server).
If used with the in or out option, -f requires an existing format file.
NOTE
Using a format file with the in or out option is optional. In the absence of the -f option, if -n, -c, -w, or -N is not
specified, the command prompts for format information and lets you save your responses in a format file (whose
default file name is Bcp.fmt).
If format_file begins with a hyphen (-) or a forward slash (/), do not include a space between -f and the
format_file value.
-F first_row
Specifies the number of the first row to export from a table or import from a data file. This parameter requires a
value greater than (>) 0 but less than (<) or equal to (=) the total number of rows. In the absence of this parameter,
the default is the first row of the file.
first_row can be a positive integer with a value up to 2^63-1. -F first_row is 1-based.
-G
This switch is used by the client when connecting to Azure SQL Database or Azure SQL Data Warehouse to
specify that the user be authenticated using Azure Active Directory authentication. The -G switch requires version
14.0.3008.27 or later. To determine your version, execute bcp -v. For more information, see Use Azure Active
Directory Authentication for authentication with SQL Database or SQL Data Warehouse.
TIP
To check if your version of bcp includes support for Azure Active Directory Authentication (AAD) type bcp -- (bcp<space>
<dash><dash>) and verify that you see -G in the list of available arguments.
The following example imports data using Azure AD Username and Password where user and password is
an AAD credential. The example imports data from file c:\last\data1.dat into table bcptest for database
testdb on Azure server aadserver.database.windows.net using Azure AD User/Password:
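The command itself is missing from this copy; a sketch consistent with the description follows (the user name and password are placeholders for an AAD credential; the file, table, database, and server names come from the scenario above):

```cmd
bcp bcptest in "c:\last\data1.dat" -c -S aadserver.database.windows.net -d testdb -G -U someuser@contoso.com -P MyAADPassword
```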
The following example imports data using Azure AD Integrated auth. The example imports data from file
c:\last\data2.txt into table bcptest for database testdb on Azure server
aadserver.database.windows.net using Azure AD Integrated auth:
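The command itself is missing from this copy; a sketch consistent with the description follows, assuming that -G with no -U or -P uses the signed-in identity for integrated authentication:

```cmd
bcp bcptest in "c:\last\data2.txt" -c -S aadserver.database.windows.net -d testdb -G
```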
-h "load hints[ ,... n]" Specifies the hint or hints to be used during a bulk import of data into a table or view.
ORDER (column [ASC | DESC] [,...n])
The sort order of the data in the data file. Bulk import performance is improved if the data being imported
is sorted according to the clustered index on the table, if any. If the data file is sorted in a different order, that
is other than the order of a clustered index key, or if there is no clustered index on the table, the ORDER
clause is ignored. The column names supplied must be valid column names in the destination table. By
default, bcp assumes the data file is unordered. For optimized bulk import, SQL Server also validates that
the imported data is sorted.
ROWS_PER_BATCH = bb
Number of rows of data per batch (as bb). Used when -b is not specified, resulting in the entire data file
being sent to the server as a single transaction. The server optimizes the bulk load according to the value
bb. By default, ROWS_PER_BATCH is unknown.
KILOBYTES_PER_BATCH = cc
Approximate number of kilobytes of data per batch (as cc). By default, KILOBYTES_PER_BATCH is
unknown.
TABLOCK
Specifies that a bulk update table-level lock is acquired for the duration of the bulk load operation;
otherwise, a row-level lock is acquired. This hint significantly improves performance because holding a lock
for the duration of the bulk-copy operation reduces lock contention on the table. A table can be loaded
concurrently by multiple clients if the table has no indexes and TABLOCK is specified. By default, locking
behavior is determined by the table option table lock on bulk load.
NOTE
If the target table is a clustered columnstore index, the TABLOCK hint is not required for loading by multiple
concurrent clients because each concurrent thread is assigned a separate rowgroup within the index and loads
data into it. Refer to the columnstore index conceptual topics for details.
CHECK_CONSTRAINTS
Specifies that all constraints on the target table or view must be checked during the bulk-import operation.
Without the CHECK_CONSTRAINTS hint, any CHECK and FOREIGN KEY constraints are ignored, and
after the operation the constraint on the table is marked as not-trusted.
NOTE
UNIQUE, PRIMARY KEY, and NOT NULL constraints are always enforced.
At some point, you will need to check the constraints on the entire table. If the table was nonempty before
the bulk import operation, the cost of revalidating the constraint may exceed the cost of applying CHECK
constraints to the incremental data. Therefore, we recommend that normally you enable constraint checking
during an incremental bulk import.
A situation in which you might want constraints disabled (the default behavior) is if the input data contains
rows that violate constraints. With CHECK constraints disabled, you can import the data and then use
Transact-SQL statements to remove data that is not valid.
NOTE
bcp now enforces data validation and data checks that might cause scripts to fail if they are executed on invalid data
in a data file.
NOTE
The -m max_errors switch does not apply to constraint checking.
FIRE_TRIGGERS
When specified with the in argument, any insert triggers defined on the destination table run during the bulk-copy
operation. If FIRE_TRIGGERS is not specified, no insert triggers run. FIRE_TRIGGERS is ignored
for the out, queryout, and format arguments.
-i input_file
Specifies the name of a response file, containing the responses to the command prompt questions for each
data field when a bulk copy is being performed using interactive mode (-n, -c, -w, or -N not specified).
If input_file begins with a hyphen (-) or a forward slash (/), do not include a space between -i and the
input_file value.
-k
Specifies that empty columns should retain a null value during the operation, rather than have any default
values for the columns inserted. For more information, see Keep Nulls or Use Default Values During Bulk
Import (SQL Server).
-K application_intent
Declares the application workload type when connecting to a server. The only value that is possible is
ReadOnly. If -K is not specified, the bcp utility will not support connectivity to a secondary replica in an
Always On availability group. For more information, see Active Secondaries: Readable Secondary Replicas
(Always On Availability Groups).
-L last_row
Specifies the number of the last row to export from a table or import from a data file. This parameter
requires a value greater than (>) 0 but less than (<) or equal to (=) the number of the last row. In the
absence of this parameter, the default is the last row of the file.
last_row can be a positive integer with a value up to 2^63-1.
-m max_errors
Specifies the maximum number of syntax errors that can occur before the bcp operation is canceled. A syntax
error implies a data conversion error to the target data type. The max_errors total excludes any errors that can be
detected only at the server, such as constraint violations.
A row that cannot be copied by the bcp utility is ignored and is counted as one error. If this option is not included,
the default is 10.
NOTE
The -m option also does not apply to converting the money or bigint data types.
-n
Performs the bulk-copy operation using the native (database) data types of the data. This option does not prompt
for each field; it uses the native values.
For more information, see Use Native Format to Import or Export Data (SQL Server).
-N
Performs the bulk-copy operation using the native (database) data types of the data for noncharacter data, and
Unicode characters for character data. This option offers a higher performance alternative to the -w option, and is
intended for transferring data from one instance of SQL Server to another using a data file. It does not prompt for
each field. Use this option when you are transferring data that contains ANSI extended characters and you want to
take advantage of the performance of native mode.
For more information, see Use Unicode Native Format to Import or Export Data (SQL Server).
If you export and then import data to the same table schema by using bcp.exe with -N, you might see a truncation
warning if there is a fixed length, non-Unicode character column (for example, char(10)).
The warning can be ignored. One way to resolve this warning is to use -n instead of -N.
-o output_file
Specifies the name of a file that receives output redirected from the command prompt.
If output_file begins with a hyphen (-) or a forward slash (/), do not include a space between -o and the output_file
value.
-P password
Specifies the password for the login ID. If this option is not used, the bcp command prompts for a password. If this
option is used at the end of the command prompt without a password, bcp uses the default password (NULL).
IMPORTANT
Do not use a blank password. Use a strong password.
To mask your password, do not specify the -P option along with the -U option. Instead, after specifying bcp along
with the -U option and other switches (do not specify -P), press ENTER, and the command will prompt you for a
password. This method ensures that your password will be masked when it is entered.
If password begins with a hyphen (-) or a forward slash (/), do not add a space between -P and the password value.
-q
Executes the SET QUOTED_IDENTIFIER ON statement in the connection between the bcp utility and an
instance of SQL Server. Use this option to specify a database, owner, table, or view name that contains a space or a
single quotation mark. Enclose the entire three-part table or view name in quotation marks ("").
To specify a database name that contains a space or single quotation mark, you must use the -q option.
-q does not apply to values passed to -d.
For more information, see Remarks, later in this topic.
-r row_term
Specifies the row terminator. The default is \n (newline character). Use this parameter to override the default row
terminator. For more information, see Specify Field and Row Terminators (SQL Server).
If you specify the row terminator in hexadecimal notation in a bcp.exe command, the value will be truncated at
0x00. For example, if you specify 0x410041, 0x41 will be used.
If row_term begins with a hyphen (-) or a forward slash (/), do not include a space between -r and the row_term
value.
-R
Specifies that currency, date, and time data is bulk copied into SQL Server using the regional format defined for
the locale setting of the client computer. By default, regional settings are ignored.
-S server_name [\instance_name] Specifies the instance of SQL Server to which to connect. If no server is
specified, the bcp utility connects to the default instance of SQL Server on the local computer. This option is
required when a bcp command is run from a remote computer on the network or a local named instance. To
connect to the default instance of SQL Server on a server, specify only server_name. To connect to a named
instance of SQL Server, specify server_name\instance_name.
-t field_term
Specifies the field terminator. The default is \t (tab character). Use this parameter to override the default field
terminator. For more information, see Specify Field and Row Terminators (SQL Server).
If you specify the field terminator in hexadecimal notation in a bcp.exe command, the value will be truncated at
0x00. For example, if you specify 0x410041, 0x41 will be used.
If field_term begins with a hyphen (-) or a forward slash (/), do not include a space between -t and the field_term
value.
-T
Specifies that the bcp utility connects to SQL Server with a trusted connection using integrated security. The
security credentials of the network user, login_id, and password are not required. If -T is not specified, you need to
specify -U and -P to successfully log in.
IMPORTANT
When the bcp utility is connecting to SQL Server with a trusted connection using integrated security, use the -T option
(trusted connection) instead of the user name and password combination. When the bcp utility is connecting to SQL
Database or SQL Data Warehouse, using Windows authentication or Azure Active Directory authentication is not supported.
Use the -U and -P options.
-U login_id
Specifies the login ID used to connect to SQL Server.
IMPORTANT
When the bcp utility is connecting to SQL Server with a trusted connection using integrated security, use the -T option
(trusted connection) instead of the user name and password combination. When the bcp utility is connecting to SQL
Database or SQL Data Warehouse, using Windows authentication or Azure Active Directory authentication is not supported.
Use the -U and -P options.
-v
Reports the bcp utility version number and copyright.
-V (80 | 90 | 100 | 110 | 120 | 130)
Performs the bulk-copy operation using data types from an earlier version of SQL Server. This option does not
prompt for each field; it uses the default values.
80 = SQL Server 2000 (8.x)
90 = SQL Server 2005
100 = SQL Server 2008 and SQL Server 2008 R2
110 = SQL Server 2012 (11.x)
120 = SQL Server 2014 (12.x)
130 = SQL Server 2016 (13.x)
For example, to generate data using the data types supported by SQL Server 2000 (8.x), rather than types
introduced in later versions of SQL Server, use the -V80 option.
For more information, see Import Native and Character Format Data from Earlier Versions of SQL Server.
-w
Performs the bulk copy operation using Unicode characters. This option does not prompt for each field; it uses
nchar as the storage type, no prefixes, \t (tab character) as the field separator, and \n (newline character) as the
row terminator. -w is not compatible with -c.
For more information, see Use Unicode Character Format to Import or Export Data (SQL Server).
-x
Used with the format and -f format_file options, generates an XML-based format file instead of the default
non-XML format file. The -x option does not work when importing or exporting data. It generates an error if used
without both format and -f format_file.
Remarks
The bcp 13.0 client is installed when you install Microsoft SQL Server 2017 tools. If tools are installed for both
SQL Server 2017 and an earlier version of SQL Server, depending on the order of values of the PATH
environment variable, you might be using the earlier bcp client instead of the bcp 13.0 client. This environment
variable defines the set of directories used by Windows to search for executable files. To discover which version
you are using, run the bcp /v command at the Windows Command Prompt. For information about how to set the
command path in the PATH environment variable, see Windows Help.
The bcp utility can also be downloaded separately from the Microsoft SQL Server 2016 Feature Pack. Select either
ENU\x64\MsSqlCmdLnUtils.msi or ENU\x86\MsSqlCmdLnUtils.msi .
XML format files are only supported when SQL Server tools are installed together with SQL Server Native Client.
For information about where to find or how to run the bcp utility and about the command prompt utilities syntax
conventions, see Command Prompt Utility Reference (Database Engine).
For information on preparing data for bulk import or export operations, see Prepare Data for Bulk Export or
Import (SQL Server).
For information about when row-insert operations that are performed by bulk import are logged in the
transaction log, see Prerequisites for Minimal Logging in Bulk Import.
To specify a database name that contains a space or quotation mark, you must use the -q option.
For owner, table, or view names that contain embedded spaces or quotation marks, you can either:
Specify the -q option, or
Enclose the owner, table, or view name in brackets ([]) inside the quotation marks.
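For instance, a table name containing spaces can be handled either way; the database and table names below are hypothetical:

```shell
# Using the -q option:
bcp "My Database.dbo.Order Details" out details.dat -c -q -T

# Using brackets inside the quotation marks:
bcp "[My Database].dbo.[Order Details]" out details.dat -c -T
```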
Data Validation
bcp now enforces data validation and data checks that might cause scripts to fail if they are executed on invalid
data in a data file. For example, bcp now verifies that:
The native representation of float or real data types is valid.
Unicode data has an even-byte length.
Forms of invalid data that could be bulk imported in earlier versions of SQL Server might fail to load now;
whereas, in earlier versions, the failure did not occur until a client tried to access the invalid data. The added
validation minimizes surprises when querying the data after bulk load.
HOST FILE DATA TYPE EFFECT
SQLCHAR or SQLVARYCHAR The data is sent in the client code page or in the code page
implied by the collation. The effect is the same as specifying
the -c switch without specifying a format file.
SQLNCHAR or SQLNVARCHAR The data is sent as Unicode. The effect is the same as
specifying the -w switch without specifying a format file.
Permissions
A bcp out operation requires SELECT permission on the source table.
A bcp in operation minimally requires SELECT/INSERT permissions on the target table. In addition, ALTER
TABLE permission is required if any of the following is true:
Constraints exist and the CHECK_CONSTRAINTS hint is not specified.
NOTE
Disabling constraints is the default behavior. To enable constraints explicitly, use the -h option with the
CHECK_CONSTRAINTS hint.
NOTE
By default, triggers are not fired. To fire triggers explicitly, use the -h option with the FIRE_TRIGGERS hint.
You use the -E option to import identity values from a data file.
NOTE
Requiring ALTER TABLE permission on the target table was new in SQL Server 2005. This requirement might cause bcp
scripts that do not enforce triggers and constraint checks to fail if the user account lacks ALTER TABLE permissions for
the target table.
Examples
This section contains the following examples:
A. Identify bcp utility version
B. Copying table rows into a data file (with a trusted connection)
C. Copying table rows into a data file (with Mixed-mode Authentication)
D. Copying data from a file to a table
E. Copying a specific column into a data file
F. Copying a specific row into a data file
G. Copying data from a query to a data file
H. Creating format files
I. Using a format file to bulk import with bcp
Example Test Conditions
The examples below make use of the WideWorldImporters sample database for SQL Server (starting with SQL Server 2016) and
Azure SQL Database. WideWorldImporters can be downloaded from https://github.com/Microsoft/sql-server-
samples/releases/tag/wide-world-importers-v1.0. See RESTORE (Transact-SQL) for the syntax to restore the
sample database. Except where specified otherwise, the examples assume that you are using Windows
Authentication and have a trusted connection to the server instance on which you are running the bcp command.
A directory named D:\BCP will be used in many of the examples.
The script below creates an empty copy of the WideWorldImporters.Warehouse.StockItemTransactions table and then
adds a primary key constraint. Run the following T-SQL script in SQL Server Management Studio (SSMS).
USE WideWorldImporters;
GO
NOTE
Truncate the StockItemTransactions_bcp table as needed.
TRUNCATE TABLE WideWorldImporters.Warehouse.StockItemTransactions_bcp;
bcp -v
This example creates a data file named StockItemTransactions_native.bcp and copies the table data
into it using the native format. The example also specifies the maximum number of syntax errors, an
error file, and an output file.
At a command prompt, enter the following command:
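The command itself is missing from this copy of the page; based on the surrounding description (native format, maximum number of syntax errors, error file, output file, trusted connection), it would look something like:

```shell
bcp WideWorldImporters.Warehouse.StockItemTransactions out D:\BCP\StockItemTransactions_native.bcp -m 1 -n -e D:\BCP\Error_out.log -o D:\BCP\Output_out.log -T
```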
Review Error_out.log and Output_out.log . Error_out.log should be blank. Compare the file sizes between
StockItemTransactions_character.bcp and StockItemTransactions_native.bcp .
C. Copying table rows into a data file (with mixed-mode authentication)
The following example illustrates the out option on the WideWorldImporters.Warehouse.StockItemTransactions table.
This example creates a data file named StockItemTransactions_character.bcp and copies the table data into it using
character format.
The example assumes that you are using mixed-mode authentication; you must use the -U switch to specify your
login ID. Also, unless you are connecting to the default instance of SQL Server on the local computer, use the -S
switch to specify the system name and, optionally, an instance name.
At a command prompt, enter the following command: (The system will prompt you for your password.)
bcp WideWorldImporters.Warehouse.StockItemTransactions out
D:\BCP\StockItemTransactions_character.bcp -c -U<login_id> -S<server_name\instance_name>
This example uses the StockItemTransactions_native.bcp data file previously created. The example also uses
the TABLOCK hint and specifies the batch size, the maximum number of syntax errors, an error file, and an
output file.
At a command prompt, enter the following command:
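The command is not shown in this copy; given the description (previously created native file, TABLOCK hint, batch size, maximum number of syntax errors, error file, output file), a plausible form is:

```shell
bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp in D:\BCP\StockItemTransactions_native.bcp -b 5000 -h "TABLOCK" -m 1 -n -e D:\BCP\Error_in.log -o D:\BCP\Output_in.log -T
```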
bcp "SELECT * from Application.People WHERE FullName = 'Amy Trefl'" queryout D:\BCP\Amy_Trefl_c.bcp -d
WideWorldImporters -c -T
NOTE
To use the -x switch, you must be using a bcp 9.0 client. For information about how to use the bcp 9.0 client, see "Remarks."
For more information, see Non-XML Format Files (SQL Server) and XML Format Files (SQL Server).
I. Using a format file to bulk import with bcp
To use a previously created format file when importing data into an instance of SQL Server, use the -f switch with
the in option. For example, the following command bulk copies the contents of a data file,
StockItemTransactions_character.bcp , into a copy of the Warehouse.StockItemTransactions_bcp table by using the
previously created format file, StockItemTransactions_c.xml . Note: the -L switch is used to import only the first 100
records.
At a command prompt, enter the following command:
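The command is missing here; following the description (format file StockItemTransactions_c.xml, -L to stop after row 100), it would be along these lines:

```shell
bcp WideWorldImporters.Warehouse.StockItemTransactions_bcp in D:\BCP\StockItemTransactions_character.bcp -L 100 -f D:\BCP\StockItemTransactions_c.xml -T
```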
NOTE
Format files are useful when the data file fields are different from the table columns; for example, in their number, ordering,
or data types. For more information, see Format Files for Importing or Exporting Data (SQL Server).
The following partial code example shows bcp export while specifying a code page 65001:
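The code example itself did not survive in this copy; a minimal sketch, with placeholder database, table, and file names, is:

```shell
# Export in character format using code page 65001 (UTF-8) and a comma field terminator:
bcp mydb.dbo.mytable out D:\BCP\mytable_utf8.csv -c -C 65001 -t , -T
```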
Additional Examples
The following topics contain examples of using bcp:
Keep Nulls or Use Default Values During Bulk Import (SQL Server)
See Also
Prepare Data for Bulk Export or Import (SQL Server)
BULK INSERT (Transact-SQL)
OPENROWSET (Transact-SQL)
SET QUOTED_IDENTIFIER (Transact-SQL)
sp_configure (Transact-SQL)
sp_tableoption (Transact-SQL)
Format Files for Importing or Exporting Data (SQL Server)
SqlLocalDB Utility
5/3/2018 • 3 minutes to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
Use the SqlLocalDB utility to create an instance of Microsoft SQL Server 2016 Express LocalDB. The
SqlLocalDB utility (SqlLocalDB.exe) is a simple command line tool to enable users and developers to create and
manage an instance of SQL Server Express LocalDB. For information about how to use LocalDB, see SQL Server
2016 Express LocalDB.
Syntax
SqlLocalDB.exe
{
[ create | c ] <instance-name> <instance-version> [-s]
| [ delete | d ] <instance-name>
| [ start | s ] <instance-name>
| [ stop | p ] <instance-name> [-i] [-k]
| [ share | h ] ["<user_SID>" | "<user_account>"] "<private-name>" "<shared-name>"
| [ unshare | u ] "<shared-name>"
| [ info | i ] <instance-name>
| [ versions | v ]
| [ trace | t ] [ on | off ]
| [ help | -? ]
}
Arguments
[ create | c ] <instance-name> <instance-version> [-s ]
Creates a new instance of SQL Server Express LocalDB. SqlLocalDB uses the version of the SQL Server Express
binaries specified by the <instance-version> argument. The version number is specified in numeric format with at
least one decimal. The minor version numbers (service packs) are optional. For example, the following two version
numbers are both acceptable: 11.0 or 11.0.1186. The specified version must be installed on the computer. If not
specified, the version number defaults to the version of the SqlLocalDB utility. Adding -s starts the new instance
of LocalDB.
[ share | h ]
Shares the specified private instance of LocalDB using the specified shared name. If the user SID or account name
is omitted, it defaults to the current user.
[ unshare | u ]
Stops the sharing of the specified shared instance of LocalDB.
[ delete | d ] <instance-name>
Deletes the specified instance of SQL Server Express LocalDB.
[ start | s ] "<instance-name>"
Starts the specified instance of SQL Server Express LocalDB. When successful, the statement returns the named
pipe address of the LocalDB.
[ stop | p ] <instance-name> [-i ] [-k ]
Stops the specified instance of SQL Server Express LocalDB. Adding -i requests the instance shutdown with the
NOWAIT option. Adding -k kills the instance process without contacting it.
[ info | i ] [ <instance-name> ]
Lists all instances of SQL Server Express LocalDB owned by the current user.
<instance-name> returns the name, version, state (Running or Stopped), and last start time for the specified
instance of SQL Server Express LocalDB, and the local pipe name of the LocalDB.
[ trace | t ] on | off
trace on enables tracing for the SqlLocalDB API calls for the current user. trace off disables tracing.
-?
Returns brief descriptions of each SqlLocalDB option.
Remarks
The instance name argument must follow the rules for SQL Server identifiers or it must be enclosed in double
quotes.
Executing SqlLocalDB without arguments returns the help text.
Operations other than start can only be performed on an instance belonging to the currently logged-in user. A
shared SqlLocalDB instance can only be started and stopped by the owner of the instance.
Examples
A. Creating an Instance of LocalDB
The following example creates an instance of SQL Server Express LocalDB named DEPARTMENT using the SQL
Server 2017 binaries and starts the instance.
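The example code is missing from this copy; assuming SQL Server 2017 corresponds to version 14.0, the command would be along these lines:

```shell
SqlLocalDB create "DEPARTMENT" 14.0 -s
```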
Execute the following code to connect to the shared instance of LocalDB using the NewLogin login.
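The connection code is also missing here; a sketch using sqlcmd, where DeptShared stands in for the (unspecified) shared instance name, might be:

```shell
sqlcmd -S (localdb)\.\DeptShared -U NewLogin -P <password>
```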
See Also
SQL Server 2016 Express LocalDB
Command-Line Management Tool: SqlLocalDB.exe
osql Utility
5/3/2018 • 11 minutes to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The osql utility allows you to enter Transact-SQL statements, system procedures, and script files. This utility uses
ODBC to communicate with the server.
IMPORTANT
This feature will be removed in a future version of SQL Server. Avoid using this feature in new development work, and plan to
modify applications that currently use the feature. Use sqlcmd instead. For more information, see sqlcmd Utility.
Syntax
osql
[-?] |
[-L] |
[
{
{-Ulogin_id [-Ppassword]} | -E }
[-Sserver_name[\instance_name]] [-Hwksta_name] [-ddb_name]
[-ltime_out] [-ttime_out] [-hheaders]
[-scol_separator] [-wcolumn_width] [-apacket_size]
[-e] [-I] [-D data_source_name]
[-ccmd_end] [-q "query"] [-Q"query"]
[-n] [-merror_level] [-r {0 | 1}]
[-iinput_file] [-ooutput_file] [-p]
[-b] [-u] [-R] [-O]
]
Arguments
-?
Displays the syntax summary of osql switches.
-L
Lists the locally configured servers and the names of the servers broadcasting on the network.
NOTE
Due to the nature of broadcasting on networks, osql may not receive a timely response from all servers. Therefore the list of
servers returned may vary for each invocation of this option.
-U login_id
Is the user login ID. Login IDs are case-sensitive.
-P password
Is a user-specified password. If the -P option is not used, osql prompts for a password. If the -P option is used at
the end of the command prompt without any password, osql uses the default password (NULL).
IMPORTANT
Do not use a blank password. Use a strong password. For more information, see Strong Passwords.
C:\>SET OSQLPASSWORD=abracadabra
C:\>osql
IMPORTANT
To mask your password, do not specify the -P option along with the -U option. Instead, after specifying osql along with the -
U option and other switches (do not specify -P), press ENTER, and osql will prompt you for a password. This method ensures
that your password will be masked when it is entered.
-E
Uses a trusted connection instead of requesting a password.
-S server_name[ \instance_name]
Specifies the instance of SQL Server to connect to. Specify server_name to connect to the default instance of SQL
Server on that server. Specify server_name\instance_name to connect to a named instance of SQL Server on that
server. If no server is specified, osql connects to the default instance of SQL Server on the local computer. This
option is required when executing osql from a remote computer on the network.
-H wksta_name
Is a workstation name. The workstation name is stored in sysprocesses.hostname and is displayed by sp_who. If
this option is not specified, the current computer name is assumed.
-d db_name
Issues a USE db_name statement when osql is started.
-l time_out
Specifies the number of seconds before an osql login times out. The default time-out for login to osql is eight
seconds.
-t time_out
Specifies the number of seconds before a command times out. If a time_out value is not specified, commands do
not time out.
-h headers
Specifies the number of rows to print between column headings. The default is to print headings one time for each
set of query results. Use -1 to specify that no headers will be printed. If –1 is used, there must be no space between
the parameter and the setting (-h-1, not -h -1).
-s col_separator
Specifies the column-separator character, which is a blank space by default. To use characters that have special
meaning to the operating system (for example, | ; & < >), enclose the character in double quotation marks (").
-w column_width
Allows the user to set the screen width for output. The default is 80 characters. When an output line has reached its
maximum screen width, it is broken into multiple lines.
-a packet_size
Allows you to request a different-sized packet. The valid values for packet_size are 512 through 65535. The default
value for osql is the server default. Increased packet size can enhance performance for large script executions
where the number of SQL statements between GO commands is substantial. Microsoft testing indicates that 8192 is
typically the fastest setting for bulk copy operations. A larger packet size can be requested, but osql defaults to the
server default if the request cannot be granted.
-e
Echoes input.
-I
Sets the QUOTED_IDENTIFIER connection option on.
-D data_source_name
Connects to an ODBC data source that is defined using the ODBC driver for SQL Server. The osql connection
uses the options specified in the data source.
NOTE
This option does not work with data sources defined for other drivers.
-c cmd_end
Specifies the command terminator. By default, commands are terminated and sent to SQL Server by entering GO
on a line by itself. When you reset the command terminator, do not use Transact-SQL reserved words or characters
that have special meaning to the operating system, whether preceded by a backslash or not.
-q " query "
Executes a query when osql starts, but does not exit osql when the query completes. (Note that the query
statement should not include GO.) If you issue a query from a batch file, use %variables, or environment
%variables%. For example:
SET table=sys.objects
osql -E -q "select name, object_id from %table%"
Use double quotation marks around the query and single quotation marks around anything embedded in the
query.
-Q" query "
Executes a query and immediately exits osql. Use double quotation marks around the query and single quotation
marks around anything embedded in the query.
-n
Removes numbering and the prompt symbol (>) from input lines.
-m error_level
Customizes the display of error messages. The message number, state, and error level are displayed for errors of
the specified severity level or higher. Nothing is displayed for errors of levels lower than the specified level. Use -1
to specify that all headers are returned with messages, even informational messages. If using -1, there must be no
space between the parameter and the setting (-m-1, not -m -1).
-r { 0| 1}
Redirects message output to the screen (stderr). If you do not specify a parameter, or if you specify 0, only error
messages with a severity level 11 or higher are redirected. If you specify 1, all message output (including "print") is
redirected.
-i input_file
Identifies the file that contains a batch of SQL statements or stored procedures. The less than (<) comparison
operator can be used in place of -i.
-o output_file
Identifies the file that receives output from osql. The greater than (>) comparison operator can be used in place of
-o.
If input_file is not Unicode and -u is not specified, output_file is stored in OEM format. If input_file is Unicode or -u
is specified, output_file is stored in Unicode format.
-p
Prints performance statistics.
-b
Specifies that osql exits and returns a DOS ERRORLEVEL value when an error occurs. The value returned to the
DOS ERRORLEVEL variable is 1 when the SQL Server error message has a severity of 11 or greater; otherwise,
the value returned is 0. Microsoft MS -DOS batch files can test the value of DOS ERRORLEVEL and handle the
error appropriately.
-u
Specifies that output_file is stored in Unicode format, regardless of the format of the input_file.
-R
Specifies that the SQL Server ODBC driver use client settings when converting currency, date, and time data to
character data.
-O
Specifies that certain osql features be deactivated to match the behavior of earlier versions of isql. These features
are deactivated:
EOF batch processing
Automatic console width scaling
Wide messages
It also sets the default DOS ERRORLEVEL value to -1.
NOTE
The -n, -O and -D options are no longer supported by osql.
Remarks
The osql utility is started directly from the operating system with the case-sensitive options listed here. After
osql starts, it accepts SQL statements and sends them to SQL Server interactively. The results are formatted and
displayed on the screen (stdout). Use QUIT or EXIT to exit from osql.
If you do not specify a user name when you start osql, SQL Server checks for the environment variables and uses
those, for example, osqluser=(user) or osqlserver=(server). If no environment variables are set, the workstation
user name is used. If you do not specify a server, the name of the workstation is used.
If neither the -U nor the -P option is used, SQL Server attempts to connect using Microsoft Windows Authentication
Mode. Authentication is based on the Microsoft Windows account of the user running osql.
The osql utility uses the ODBC API. The utility uses the SQL Server ODBC driver default settings for the SQL
Server ISO connection options. For more information, see Effects of ANSI Options.
NOTE
The osql utility does not support CLR user-defined data types. To process these data types, you must use the sqlcmd utility.
For more information, see sqlcmd Utility.
OSQL Commands
In addition to Transact-SQL statements within osql, these commands are also available.
COMMAND DESCRIPTION
GO [count] Executes all statements entered after the last GO. If count is
specified, the cached statements are executed count times as a
single batch.
RESET Clears any statements you have entered.
QUIT or EXIT() Exits from osql.
CTRL+C Ends a query without exiting from osql.
NOTE
The !! and ED commands are no longer supported by osql.
The command terminators GO (by default), RESET EXIT, QUIT, and CTRL+C, are recognized only if they appear at
the beginning of a line, immediately following the osql prompt.
GO signals both the end of a batch and the execution of any cached Transact-SQL statements. When you press
ENTER at the end of each input line, osql caches the statements on that line. When you press ENTER after typing
GO, all of the currently cached statements are sent as a batch to SQL Server.
The current osql utility works as if there is an implied GO at the end of any script executed; therefore, all
statements in the script execute.
End a command by typing a line beginning with a command terminator. You can follow the command terminator
with an integer to specify how many times the command should be run. For example, to execute this command
100 times, type:
SELECT x = 1
GO 100
The results are printed once at the end of execution. osql does not accept more than 1,000 characters per line.
Large statements should be spread across multiple lines.
The command recall facilities of Windows can be used to recall and modify osql statements. The existing query
buffer can be cleared by typing RESET.
When running stored procedures, osql prints a blank line between each set of results in a batch. In addition, the "0
rows affected" message does not appear when it does not apply to the statement executed.
You can read in a file containing a query (such as Stores.qry) by typing a command similar to this:
osql -E -i stores.qry
You can read in a file containing a query (such as Titles.qry) and direct the results to another file by typing a
command similar to this:
osql -E -i titles.qry -o titles.res
IMPORTANT
When possible, use the -E option (trusted connection).
When using osql interactively, you can read an operating-system file into the command buffer with :r file_name.
This sends the SQL script in file_name directly to the server as a single batch.
NOTE
When using osql, SQL Server treats the batch separator GO, if it appears in a SQL script file, as a syntax error.
Inserting Comments
You can include comments in a Transact-SQL statement submitted to SQL Server by osql. Two types of
commenting styles are allowed: -- and /*...*/.
For example:
EXIT(SELECT @@ROWCOUNT)
You can also include the EXIT parameter as part of a batch file. For example:
The osql utility passes everything between the parentheses () to the server exactly as entered. If a stored system
procedure selects a set and returns a value, only the selection is returned. The EXIT() statement with nothing
between the parentheses executes everything preceding it in the batch and then exits with no return value.
There are four EXIT formats:
EXIT
Does not execute the batch; quits immediately and returns no value.
EXIT()
Executes the batch, and then quits and returns no value.
EXIT(query)
Executes the batch, including the query, and then quits after returning the results of the query.
RAISERROR with a state of 127
If RAISERROR is used within an osql script and a state of 127 is raised, osql will quit and return the message ID back to the
client. For example:
RAISERROR(50001, 10, 127)
This error will cause the osql script to end and the message ID 50001 will be returned to the client.
The return values -1 to -99 are reserved by SQL Server; osql defines these values:
-100
Error encountered prior to selecting return value.
-101
No rows found when selecting return value.
-102
Conversion error occurred when selecting return value.
This statement produces a result of 10.3496 , which indicates that the value is stored with all decimal places intact.
See Also
Comment (MDX)
-- (Comment) (MDX)
CAST and CONVERT (Transact-SQL)
RAISERROR (Transact-SQL)
Profiler Utility
5/3/2018 • 3 minutes to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The profiler utility launches the SQL Server Profiler tool. The optional arguments listed later in this topic allow
you to control how the application starts.
NOTE
The profiler utility is not intended for scripting traces. For more information, see SQL Server Profiler.
Syntax
profiler
[ /? ] |
[
{
{ /U login_id [ /P password ] }
| /E
}
{[ /S sql_server_name ] | [ /A analysis_services_server_name ] }
[ /D database ]
[ /T "template_name" ]
[ /B { "trace_table_name" } ]
{ [/F "filename" ] | [ /O "filename" ] }
[ /L locale_ID ]
[ /M "MM-DD-YY hh:mm:ss" ]
[ /R ]
[ /Z file_size ]
]
Arguments
/?
Displays the syntax summary of profiler arguments.
/U login_id
Is the user login ID for SQL Server Authentication. Login IDs are case sensitive.
NOTE
When possible, use Windows Authentication.
/P password
Specifies a user-specified password for SQL Server Authentication.
/E
Specifies connecting with Windows Authentication with the current user's credentials.
/S sql_server_name
Specifies an instance of SQL Server. Profiler will automatically connect to the specified server using the
authentication information specified in the /U and /P switches or the /E switch. To connect to a named instance of
SQL Server, use /S sql_server_name\instance_name.
/A analysis_services_server_name
Specifies an instance of Analysis Services. Profiler will automatically connect to the specified server using the
authentication information specified in the /U and /P switches or the /E switch. To connect to a named instance of
SQL Server use /A analysis_services_server_name\instance_name.
/D database
Specifies the name of the database to be used with the connection. This option will select the default database for
the specified user if no database is specified.
/B " trace_table_name "
Specifies a trace table to load when the profiler is launched. You must specify the database, the user or schema, and
the table.
/T "template_name"
Specifies the template that will be loaded to configure the trace. The template name must be in quotes. The
template name must be in either the system template directory or the user template directory. If two templates
with the same name exist in both directories, the template from the system directory will be loaded. If no template
with the specified name exists, the standard template will be loaded. Note that the file extension for the template
(.tdf ) should not be specified as part of the template_name. For example:
/T "standard"
PARAMETER DEFINITION
MM Two-digit month
DD Two-digit day
YY Two-digit year
hh Two-digit hour
mm Two-digit minute
ss Two-digit second
NOTE
The "MM-DD-YY hh:mm:ss" format can only be used if the Use regional settings to display date and time values option
is enabled in SQL Server Profiler. If this option is not enabled, you must use the "YYYY-MM-DD hh:mm:ss" date and time
format.
/R
Enables trace file rollover.
/Z file_size
Specifies the size of the trace file in megabytes (MB). The default size is 5 MB. If rollover is enabled, all rollover files
will be limited to the value specified in this argument.
Remarks
To start a trace with a specific template, use the /S and /T options together. For example, to start a trace using the
Standard template on MyServer\MyInstance, enter the following at the command prompt:
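The command itself is absent from this copy; combining the /S, /E, and /T switches described above, it would take roughly this form:

```shell
profiler /S MyServer\MyInstance /E /T "Standard"
```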
See Also
Command Prompt Utility Reference (Database Engine)
sqlagent90 Application
5/3/2018 • 2 minutes to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The sqlagent90 application starts SQL Server Agent from the command prompt. Usually, SQL Server Agent
should be run from SQL Server Management Studio or by using SQL-SMO methods in an application. Only run
sqlagent90 from the command prompt when you are diagnosing SQL Server Agent, or when you are directed to
it by your primary support provider.
Syntax
sqlagent90
-c [-v] [-i instance_name]
Arguments
-c
Indicates that SQL Server Agent is running from the command prompt and is independent of the Microsoft
Windows Services Control Manager. When -c is used, SQL Server Agent cannot be controlled from either the
Services application in Administrative Tools or SQL Server Configuration Manager. This argument is mandatory.
-v
Indicates that SQL Server Agent runs in verbose mode and writes diagnostic information to the command-prompt
window. The diagnostic information is the same as the information written to the SQL Server Agent error log.
-i instance_name
Indicates that SQL Server Agent connects to the named SQL Server instance specified by instance_name.
Remarks
After displaying a copyright message, sqlagent90 displays output in the command prompt window only when the
-v switch is specified. To stop sqlagent90, press CTRL+C at the command prompt. Do not close the command-
prompt window before stopping sqlagent90.
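For example, a diagnostic session against a named instance might be started like this; MyInstance is a placeholder:

```shell
sqlagent90 -c -v -i MyInstance
```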
See Also
Automated Administration Tasks (SQL Server Agent)
sqlcmd Utility
5/30/2018 • 32 minutes to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command
prompt, in Query Editor in SQLCMD mode, in a Windows script file or in an operating system (Cmd.exe) job
step of a SQL Server Agent job. This utility uses ODBC to execute Transact-SQL batches.
NOTE
The most recent version of the sqlcmd utility is available as a web release from the Download Center. You need version 13.1
or higher to support Always Encrypted ( -g ) and Azure Active Directory authentication ( -G ). (You may have several
versions of sqlcmd.exe installed on your computer. Be sure you are using the correct version. To determine the version,
execute sqlcmd -? .)
You can try the sqlcmd utility from Azure Cloud Shell, as it is pre-installed by default.
To run sqlcmd statements in SSMS, select SQLCMD Mode from the top navigation Query Menu dropdown.
IMPORTANT
SQL Server Management Studio (SSMS) uses the Microsoft .NET Framework SqlClient for execution in regular and SQLCMD
mode in Query Editor. When sqlcmd is run from the command line, sqlcmd uses the ODBC driver. Because different
default options may apply, you might see different behavior when you execute the same query in SQL Server Management
Studio in SQLCMD Mode and in the sqlcmd utility.
Currently, sqlcmd does not require a space between the command line option and the value. However, in a future
release, a space may be required between the command line option and the value.
Other topics:
Start the sqlcmd Utility
Use the sqlcmd Utility
Syntax
sqlcmd
-a packet_size
-A (dedicated administrator connection)
-b (terminate batch job if there is an error)
-c batch_terminator
-C (trust the server certificate)
-d db_name
-e (echo input)
-E (use trusted connection)
-f codepage | i:codepage[,o:codepage] | o:codepage[,i:codepage]
-g (enable column encryption)
-G (use Azure Active Directory for authentication)
-h rows_per_header
-H workstation_name
-i input_file
-I (enable quoted identifiers)
-j (Print raw error messages)
-k[1 | 2] (remove or replace control characters)
-K application_intent
-l login_timeout
-L[c] (list servers, optional clean output)
-m error_level
-M multisubnet_failover
-N (encrypt connection)
-o output_file
-p[1] (print statistics, optional colon format)
-P password
-q "cmdline query"
-Q "cmdline query" (and exit)
-r[0 | 1] (msgs to stderr)
-R (use client regional settings)
-s col_separator
-S [protocol:]server[\instance_name][,port]
-t query_timeout
-u (unicode output file)
-U login_id
-v var = "value"
-V error_severity_level
-w column_width
-W (remove trailing spaces)
-x (disable variable substitution)
-X[1] (disable commands, startup script, environment variables, optional exit)
-y variable_length_type_display_width
-Y fixed_length_type_display_width
-z new_password
-Z new_password (and exit)
-? (usage)
Command-line Options
Login-Related Options
-A
Logs in to SQL Server with a Dedicated Administrator Connection (DAC). This kind of connection is used to
troubleshoot a server. It works only with server computers that support DAC. If DAC is not available,
sqlcmd generates an error message and then exits. For more information about DAC, see Diagnostic Connection
for Database Administrators. The -A option is not supported with the -G option. When connecting to SQL
Database using -A, you must be a SQL Server administrator. The DAC is not available for an Azure Active
Directory administrator.
-C
This switch is used by the client to configure it to implicitly trust the server certificate without validation. This
option is equivalent to the ADO.NET option TRUSTSERVERCERTIFICATE = true .
-d db_name
Issues a USE db_name statement when you start sqlcmd. This option sets the sqlcmd scripting variable
SQLCMDDBNAME. This specifies the initial database. The default is your login's default-database property. If the
database does not exist, an error message is generated and sqlcmd exits.
-l login_timeout
Specifies the number of seconds before a sqlcmd login to the ODBC driver times out when you try to connect to
a server. This option sets the sqlcmd scripting variable SQLCMDLOGINTIMEOUT. The default time-out for login
to sqlcmd is eight seconds. When using the -G option to connect to SQL Database or SQL Data Warehouse and
authenticate using Azure Active Directory, a timeout value of at least 30 seconds is recommended. The login
time-out must be a number between 0 and 65534. If the value supplied is not numeric or does not fall into that
range, sqlcmd generates an error message. A value of 0 specifies an infinite time-out.
-E
Uses a trusted connection instead of a user name and password to log on to SQL Server. By default, when -E is
not specified, sqlcmd uses the trusted connection option.
The -E option ignores possible user name and password environment variable settings such as
SQLCMDPASSWORD. If the -E option is used together with the -U option or the -P option, an error message is
generated.
-g
Sets the Column Encryption Setting to Enabled . For more information, see Always Encrypted. Only master keys
stored in Windows Certificate Store are supported. The -g switch requires at least sqlcmd version 13.1. To
determine your version, execute sqlcmd -? .
-G
This switch is used by the client when connecting to SQL Database or SQL Data Warehouse to specify that the
user be authenticated using Azure Active Directory authentication. This option sets the sqlcmd scripting variable
SQLCMDUSEAAD = true. The -G switch requires at least sqlcmd version 13.1. To determine your version,
execute sqlcmd -? . For more information, see Connecting to SQL Database or SQL Data Warehouse By Using
Azure Active Directory Authentication. The -A option is not supported with the -G option.
IMPORTANT
The -G option only applies to Azure SQL Database and Azure SQL Data Warehouse. For example, an Azure Active
Directory password connection uses a connection string of the form:
SERVER = Target_DB_or_DW.testsrv.database.windows.net;UID=bob@contoso.com;PWD=MyAADPassword;AUTHENTICATION = ActiveDirectoryPassword
NOTE
The -E option (Trusted_Connection) cannot be used along with the -G option.
-H workstation_name
A workstation name. This option sets the sqlcmd scripting variable SQLCMDWORKSTATION. The workstation
name is listed in the hostname column of the sys.sysprocesses catalog view and can be returned using the
stored procedure sp_who. If this option is not specified, the default is the current computer name. This name can
be used to identify different sqlcmd sessions.
-j
Prints raw error messages to the screen.
-K application_intent
Declares the application workload type when connecting to a server. The only currently supported value is
ReadOnly. If -K is not specified, the sqlcmd utility will not support connectivity to a secondary replica in an
Always On availability group. For more information, see Active Secondaries: Readable Secondary Replica (Always
On Availability Groups)
-M multisubnet_failover
Always specify -M when connecting to the availability group listener of a SQL Server availability group or a SQL
Server Failover Cluster Instance. -M provides for faster detection of and connection to the (currently) active
server. If -M is not specified, -M is off. For more information, see Always On Availability Groups, Creation and
Configuration of Availability Groups (SQL Server), Failover Clustering and Always On Availability Groups (SQL
Server), and Active Secondaries: Readable Secondary Replicas (Always On Availability Groups).
-N
This switch is used by the client to request an encrypted connection.
-P password
Is a user-specified password. Passwords are case sensitive. If the -U option is used and the -P option is not used,
and the SQLCMDPASSWORD environment variable has not been set, sqlcmd prompts the user for a password.
To specify a null password (not recommended), use -P "". Always use a strong password.
The password prompt is displayed by printing the password prompt to the console, as follows: Password:
User input is hidden. This means that nothing is displayed and the cursor stays in position.
The SQLCMDPASSWORD environment variable lets you set a default password for the current session.
Therefore, passwords do not have to be hard-coded into batch files.
The following example first sets the SQLCMDPASSWORD variable at the command prompt and then accesses
the sqlcmd utility. At the command prompt, type:
SET SQLCMDPASSWORD= p@a$$w0rd
At the following command prompt, type:
sqlcmd
If the user name and password combination is incorrect, an error message is generated.
NOTE
The OSQLPASSWORD environment variable was kept for backward compatibility. The
SQLCMDPASSWORD environment variable takes precedence over the OSQLPASSWORD environment
variable; this means that sqlcmd and osql can be used next to each other without interference and that old scripts
will continue to work.
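The precedence rules described above (an explicit -P value wins, then SQLCMDPASSWORD, then the legacy OSQLPASSWORD variable, and finally an interactive prompt) can be sketched as follows. This is an illustrative Python sketch of the documented lookup order, not part of sqlcmd; the `resolve_password` helper is a hypothetical name.

```python
import os

def resolve_password(p_option=None, env=None):
    """Sketch of the order sqlcmd is documented to use when picking a
    password: -P first, then SQLCMDPASSWORD, then legacy OSQLPASSWORD;
    None here stands for 'prompt the user with Password:'."""
    env = env if env is not None else os.environ
    if p_option is not None:
        return p_option
    if "SQLCMDPASSWORD" in env:
        return env["SQLCMDPASSWORD"]
    if "OSQLPASSWORD" in env:
        return env["OSQLPASSWORD"]
    return None  # sqlcmd would display the hidden password prompt

# SQLCMDPASSWORD takes precedence over the legacy variable:
print(resolve_password(env={"SQLCMDPASSWORD": "new", "OSQLPASSWORD": "old"}))
```

This is why old osql scripts keep working: OSQLPASSWORD is only consulted when the newer variable is absent.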
If the -P option is used with the -E option, an error message is generated.
If the -P option is followed by more than one argument, an error message is generated and the program exits.
-S [protocol:]server[\instance_name][,port]
Specifies the instance of SQL Server to which to connect. It sets the sqlcmd scripting variable SQLCMDSERVER.
Specify server_name to connect to the default instance of SQL Server on that server computer. Specify
server_name [ \instance_name ] to connect to a named instance of SQL Server on that server computer. If no
server computer is specified, sqlcmd connects to the default instance of SQL Server on the local computer. This
option is required when you execute sqlcmd from a remote computer on the network.
protocol can be tcp (TCP/IP), lpc (shared memory), or np (named pipes).
If you do not specify a server_name [ \instance_name ] when you start sqlcmd, SQL Server checks for and uses
the SQLCMDSERVER environment variable.
NOTE
The OSQLSERVER environment variable has been kept for backward compatibility. The SQLCMDSERVER environment
variable takes precedence over the OSQLSERVER environment variable; this means that sqlcmd and osql can be used next
to each other without interference and that old scripts will continue to work.
-U login_id
Is the login name or contained database user name. For contained database users you must provide the database
name option (-d).
NOTE
The OSQLUSER environment variable is available for backward compatibility. The SQLCMDUSER environment variable takes
precedence over the OSQLUSER environment variable. This means that sqlcmd and osql can be used next to each other
without interference. It also means that existing osql scripts will continue to work.
If neither the -U option nor the -P option is specified, sqlcmd tries to connect by using Microsoft Windows
Authentication mode. Authentication is based on the Windows account of the user who is running sqlcmd.
If the -U option is used with the -E option (described later in this topic), an error message is generated. If the –U
option is followed by more than one argument, an error message is generated and the program exits.
-z new_password
Change password:
sqlcmd -U someuser -P s0mep@ssword -z a_new_p@a$$w0rd
-Z new_password
Change password and exit:
sqlcmd -U someuser -P s0mep@ssword -Z a_new_p@a$$w0rd
Input/Output Options
-f codepage | i:codepage[,o:codepage] | o:codepage[,i:codepage]
Specifies the input and output code pages. The codepage number is a numeric value that specifies an installed
Windows code page.
Code-page conversion rules:
If no code pages are specified, sqlcmd will use the current code page for both input and output files, unless
the input file is a Unicode file, in which case no conversion is required.
sqlcmd automatically recognizes both big-endian and little-endian Unicode input files. If the -u option has
been specified, the output will always be little-endian Unicode.
If no output file is specified, the output code page will be the console code page. This enables the output to
be displayed correctly on the console.
Multiple input files are assumed to be of the same code page. Unicode and non-Unicode input files can be
mixed.
Enter chcp at the command prompt to verify the code page of Cmd.exe.
-i input_file[,input_file2...]
Identifies the file that contains a batch of SQL statements or stored procedures. Multiple files may be
specified that will be read and processed in order. Do not use any spaces between file names. sqlcmd will
first check to see whether all the specified files exist. If one or more files do not exist, sqlcmd will exit. The -i
and the -Q/-q options are mutually exclusive.
Path examples:
-i C:\<filename>
-i \\<Server>\<Share$>\<filename>
-i "C:\Some Folder\<file name>"
-o C:\<filename>
-o \\<Server>\<Share$>\<filename>
-o "C:\Some Folder\<file name>"
-q "cmdline query"
Executes a query when sqlcmd starts, but does not exit sqlcmd when the query completes. Multiple
semicolon-delimited queries can be executed. Use quotation marks around the query. At the command prompt, type:
sqlcmd -d AdventureWorks2012 -q "SELECT TOP 5 FirstName FROM Person.Person;SELECT TOP 5 LastName FROM Person.Person;"
IMPORTANT
Do not use the GO terminator in the query.
If -b is specified together with this option, sqlcmd exits on error. -b is described later in this topic.
-Q" cmdline query "
Executes a query when sqlcmd starts and then immediately exits sqlcmd. Multiple-semicolon-delimited queries
can be executed.
Use quotation marks around the query, as shown in the following example.
At the command prompt, type:
sqlcmd -d AdventureWorks2012 -Q "SELECT FirstName, LastName FROM Person.Person WHERE LastName LIKE 'Whi%';"
sqlcmd -d AdventureWorks2012 -Q "SELECT TOP 5 FirstName FROM Person.Person;SELECT TOP 5 LastName FROM
Person.Person;"
IMPORTANT
Do not use the GO terminator in the query.
If -b is specified together with this option, sqlcmd exits on error. -b is described later in this topic.
-t query_timeout
Specifies the number of seconds before a command (or SQL statement) times out. This option sets the sqlcmd
scripting variable SQLCMDSTATTIMEOUT. If a time_out value is not specified, the command does not time out.
The querytime_out must be a number between 1 and 65534. If the value supplied is not numeric or does not fall
into that range, sqlcmd generates an error message.
NOTE
The actual time out value may vary from the specified time_out value by several seconds.
-x
Causes sqlcmd to ignore scripting variables. This is useful when a script contains many INSERT statements that
may contain strings that have the same format as regular variables, such as $(variable_name).
Formatting Options
-h headers
Specifies the number of rows to print between the column headings. The default is to print headings one time for
each set of query results. This option sets the sqlcmd scripting variable SQLCMDHEADERS. Use -1 to specify
that headers must not be printed. Any value that is not valid causes sqlcmd to generate an error message and
then exit.
-k [1 | 2]
Removes all control characters, such as tabs and new line characters from the output. This preserves column
formatting when data is returned. If 1 is specified, the control characters are replaced by a single space. If 2 is
specified, consecutive control characters are replaced by a single space. -k is the same as -k1.
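The three behaviors of -k can be sketched as follows. This is a minimal Python sketch of the rules as documented (remove, replace each control character with a space, or collapse each run of consecutive control characters into one space); it is illustrative only, and `strip_control` is a hypothetical name.

```python
def strip_control(text, mode=None):
    """Sketch of -k handling: mode None removes control characters,
    1 replaces each with a single space, 2 replaces each run of
    consecutive control characters with a single space."""
    out, prev_ctrl = [], False
    for ch in text:
        if ord(ch) < 32:  # tabs, newlines, etc. are control characters
            if mode == 1 or (mode == 2 and not prev_ctrl):
                out.append(" ")
            prev_ctrl = True
        else:
            out.append(ch)
            prev_ctrl = False
    return "".join(out)

print(strip_control("a\t\tb", 2))  # prints "a b"
```

Collapsing runs (mode 2) is what preserves column formatting when a value contains embedded tabs or newlines.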
-s col_separator
Specifies the column-separator character. The default is a blank space. This option sets the sqlcmd scripting
variable SQLCMDCOLSEP. To use characters that have special meaning to the operating system such as the
ampersand (&), or semicolon (;), enclose the character in quotation marks ("). The column separator can be any 8-
bit character.
-w column_width
Specifies the screen width for output. This option sets the sqlcmd scripting variable SQLCMDCOLWIDTH. The
column width must be a number greater than 8 and less than 65536. If the specified column width does not fall
into that range, sqlcmd generates an error message. The default width is 80 characters. When an output line
exceeds the specified column width, it wraps on to the next line.
-W
This option removes trailing spaces from a column. Use this option together with the -s option when preparing
data that is to be exported to another application. Cannot be used with the -y or -Y options.
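Together, the -s, -w, and -W options control how a result row is rendered: values are padded with spaces to the column width, columns are joined with the separator, and -W drops the padding. The following Python sketch illustrates that interaction under those assumptions; `format_row` is a hypothetical helper, not a sqlcmd API.

```python
def format_row(values, widths, sep=" ", trim=False):
    """Sketch of sqlcmd column output: pad each value to its column
    width, join columns with the -s separator; trim=True models -W,
    which drops the trailing-space padding for export scenarios."""
    cells = [str(v) if trim else str(v).ljust(w)
             for v, w in zip(values, widths)]
    return sep.join(cells)

# Padded output with a custom separator (-s "|"):
print(format_row(["Ken", "Sanchez"], [10, 10], sep="|"))
# Export-friendly output (-W with -s ","):
print(format_row(["Ken", "Sanchez"], [10, 10], sep=",", trim=True))
```

This is why the documentation suggests combining -W with -s when preparing data for another application: without trimming, the padding becomes part of each exported field.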
-y variable_length_type_display_width
Sets the sqlcmd scripting variable SQLCMDMAXVARTYPEWIDTH . The default is 256. It limits the number of characters
that are returned for the large variable length data types:
varchar(max)
nvarchar(max)
varbinary(max)
xml
UDT (user-defined data types)
text
ntext
image
NOTE
UDTs can be of fixed length depending on the implementation. If the length of a fixed-length UDT is shorter than
display_width, the value of the UDT returned is not affected. However, if the length is longer than display_width,
the output is truncated.
IMPORTANT
Use the -y 0 option with extreme caution because it may cause serious performance issues on both the server and the
network, depending on the size of data returned.
-Y fixed_length_type_display_width
Sets the sqlcmd scripting variable SQLCMDMAXFIXEDTYPEWIDTH . The default is 0 (unlimited). Limits the number of
characters that are returned for the following data types:
char(n), where 1<=n<=8000
nchar(n), where 1<=n<=4000
varchar(n), where 1<=n<=8000
nvarchar(n), where 1<=n<=4000
varbinary(n), where 1<=n<=4000
variant
Error Reporting Options
-b
Specifies that sqlcmd exits and returns a DOS ERRORLEVEL value when an error occurs. The value that is
returned to the DOS ERRORLEVEL variable is 1 when the SQL Server error message has a severity level
greater than 10; otherwise, the value returned is 0. If the -V option has been set in addition to -b, sqlcmd
will not report an error if the severity level is lower than the values set using -V. Command prompt batch
files can test the value of ERRORLEVEL and handle the error appropriately. sqlcmd does not report errors
for severity level 10 (informational messages).
If the sqlcmd script contains an incorrect comment, syntax error, or is missing a scripting variable,
ERRORLEVEL returned is 1.
-m error_level
Controls which error messages are sent to stdout. Messages that have a severity level greater than or
equal to this level are sent. When this value is set to -1, all messages, including informational messages, are
sent. Spaces are not allowed between -m and the value. For example, -m-1 is valid, and -m -1 is not.
This option also sets the sqlcmd scripting variable SQLCMDERRORLEVEL. This variable has a default of
0.
-V error_severity_level
Controls the severity level that is used to set the ERRORLEVEL variable. Error messages that have severity
levels greater than or equal to this value set ERRORLEVEL. Values that are less than 0 are reported as 0.
Batch and CMD files can be used to test the value of the ERRORLEVEL variable.
Miscellaneous Options
-a packet_size
Requests a packet of a different size. This option sets the sqlcmd scripting variable SQLCMDPACKETSIZE.
packet_size must be a value between 512 and 32767. The default is 4096. A larger packet size can enhance
performance for execution of scripts that have lots of SQL statements between GO commands. You can
request a larger packet size. However, if the request is denied, sqlcmd uses the server default for packet
size.
-c batch_terminator
Specifies the batch terminator. By default, commands are terminated and sent to SQL Server by typing the
word "GO" on a line by itself. When you reset the batch terminator, do not use Transact-SQL reserved
keywords or characters that have special meaning to the operating system, even if they are preceded by a
backslash.
-L [c]
Lists the locally configured server computers, and the names of the server computers that are broadcasting
on the network. This parameter cannot be used in combination with other parameters. The maximum
number of server computers that can be listed is 3000. If the server list is truncated because of the size of
the buffer a warning message is displayed.
NOTE
Because of the nature of broadcasting on networks, sqlcmd may not receive a timely response from all servers. Therefore,
the list of servers returned may vary for each invocation of this option.
If the optional parameter c is specified, the output appears without the Servers: header line and each server line is
listed without leading spaces. This is referred to as clean output. Clean output improves the processing
performance of scripting languages.
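The difference between the default -L output and the clean -Lc output can be sketched as follows. This Python sketch is illustrative only; the exact indentation of the default format is an assumption, and `server_list` is a hypothetical name.

```python
def server_list(servers, clean=False):
    """Sketch of -L versus -Lc: the default format prints a Servers:
    header with each name indented (indent width is an assumption);
    clean output prints bare names, one per line, for easy parsing."""
    if clean:
        return "\n".join(servers)
    return "Servers:\n" + "\n".join("    " + s for s in servers)

print(server_list(["SRV1", "SRV2"], clean=True))
```

Clean output is easier to consume from a script because each line is exactly one server name, with no header or leading spaces to strip.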
-p[1]
Prints performance statistics for every result set. The following is an example of the format for performance
statistics:
Network packet size (bytes): n
x xact[s]:
Clock Time (ms.): total  t1  avg  t2 (t3 xacts per sec.)
Where:
x = Number of transactions that are processed by SQL Server.
t1 = Total time for all transactions.
t2 = Average time for a single transaction.
t3 = Average number of transactions per second.
All times are in milliseconds.
If the optional parameter 1 is specified, the output format of the statistics is in colon-separated format that can be
imported easily into a spreadsheet or processed by a script.
If the optional parameter is any value other than 1, an error is generated and sqlcmd exits.
-X[1]
Disables commands that might compromise system security when sqlcmd is executed from a batch file. The
disabled commands are still recognized; sqlcmd issues a warning message and continues. If the optional
parameter 1 is specified, sqlcmd generates an error message and then exits. The following commands are
disabled when the -X option is used:
ED
!! command
If the -X option is specified, it prevents environment variables from being passed on to sqlcmd. It also
prevents the startup script specified by using the SQLCMDINI scripting variable from being executed. For
more information about sqlcmd scripting variables, see Use sqlcmd with Scripting Variables.
-?
Displays the version of sqlcmd and a syntax summary of sqlcmd options.
Remarks
Options do not have to be used in the order shown in the syntax section.
When multiple results are returned, sqlcmd prints a blank line between each result set in a batch. In addition, the
<x> rows affected message does not appear when it does not apply to the statement executed.
To use sqlcmd interactively, type sqlcmd at the command prompt with any one or more of the options described
earlier in this topic. For more information, see Use the sqlcmd Utility
NOTE
The options -L, -Q, -Z or -i cause sqlcmd to exit after execution.
The total length of the sqlcmd command line in the command environment (Cmd.exe), including all arguments
and expanded variables, is limited to the maximum command-line length that the operating system allows for
Cmd.exe.
NOTE
To view the environmental variables, in Control Panel, open System, and then click the Advanced tab.
VARIABLE            RELATED SWITCH   R/W   DEFAULT
SQLCMDUSER          -U               R     ""
SQLCMDPASSWORD      -P               --    ""
SQLCMDSERVER        -S               R     "DefaultLocalInstance"
SQLCMDWORKSTATION   -H               R     "ComputerName"
SQLCMDDBNAME        -d               R     ""
SQLCMDPACKETSIZE    -a               R     "4096"
SQLCMDERRORLEVEL    -m               R/W   0
SQLCMDINI                            R     ""
sqlcmd Commands
In addition to Transact-SQL statements within sqlcmd, the following commands are also available:
GO [count]
[:] ED
[:] !!
:r
:Setvar
:List
:Out
:Perftrace
:Help
:Listvar
IMPORTANT
To maintain backward compatibility with existing osql scripts, some of the commands will be recognized without the
colon. This is indicated by the [:].
sqlcmd commands are recognized only if they appear at the start of a line.
All sqlcmd commands are case insensitive.
Each command must be on a separate line. A command cannot be followed by a Transact-SQL statement
or another command.
Commands are executed immediately. They are not put in the execution buffer as Transact-SQL statements
are.
Editing Commands
[:] ED
Starts the text editor. This editor can be used to edit the current Transact-SQL batch, or the last executed
batch. To edit the last executed batch, the ED command must be typed immediately after the last batch has
completed execution.
The text editor is defined by the SQLCMDEDITOR environment variable. The default editor is 'Edit'. To
change the editor, set the SQLCMDEDITOR environment variable. For example, to set the editor to
Microsoft Notepad, at the command prompt, type:
SET SQLCMDEDITOR=notepad
[:] RESET
Clears the statement cache.
:List
Prints the content of the statement cache.
Variables
:Setvar <var> [ "value" ]
Defines sqlcmd scripting variables. Scripting variables have the following format: $(VARNAME) .
Variable names are case insensitive.
Scripting variables can be set in the following ways:
Implicitly using a command-line option. For example, the -l option sets the SQLCMDLOGINTIMEOUT
sqlcmd variable.
Explicitly by using the :Setvar command.
By defining an environment variable before you run sqlcmd.
NOTE
The -X option prevents environment variables from being passed on to sqlcmd.
If a variable defined by using :Setvar and an environment variable have the same name, the variable defined by
using :Setvar takes precedence.
Variable names must not contain blank space characters.
Variable names cannot have the same form as a variable expression, such as $(var).
If the string value of the scripting variable contains blank spaces, enclose the value in quotation marks. If a value
for a scripting variable is not specified, the scripting variable is dropped.
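The substitution and precedence rules above (case-insensitive names, :Setvar beating an environment variable of the same name, and -x leaving $(VARNAME) untouched) can be sketched in Python. This is an illustrative model of the documented rules, not sqlcmd's implementation; `substitute` is a hypothetical name.

```python
import os
import re

def substitute(script, setvars, env=None, x_option=False):
    """Sketch of $(VARNAME) expansion: -x disables substitution
    entirely; otherwise a :Setvar definition takes precedence over an
    environment variable of the same (case-insensitive) name, and an
    unknown variable reference is left as-is."""
    if x_option:
        return script
    env = env if env is not None else dict(os.environ)
    lookup = {k.upper(): v for k, v in env.items()}
    lookup.update({k.upper(): v for k, v in setvars.items()})  # :Setvar wins
    return re.sub(r"\$\((\w+)\)",
                  lambda m: lookup.get(m.group(1).upper(), m.group(0)),
                  script)

print(substitute("SELECT * FROM $(tbl);", {"TBL": "Person.Person"},
                 env={"tbl": "ignored"}))
```

Leaving unresolved references untouched mirrors why -x is useful for scripts full of literal strings shaped like $(variable_name).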
:Listvar
Displays a list of the scripting variables that are currently set.
NOTE
Only scripting variables that are set by sqlcmd, and those that are set using the :Setvar command will be displayed.
Output Commands
:Error <filename> | STDERR | STDOUT
Redirect all error output to the file specified by file name, to stderr or to stdout. The Error command can appear
multiple times in a script. By default, error output is sent to stderr.
file name
Creates and opens a file that will receive the output. If the file already exists, it will be truncated to zero bytes. If the
file is not available because of permissions or other reasons, the output will not be switched and will be sent to the
last specified or default destination.
STDERR
Switches error output to the stderr stream. If this has been redirected, the target to which the stream has been
redirected will receive the error output.
STDOUT
Switches error output to the stdout stream. If this has been redirected, the target to which the stream has been
redirected will receive the error output.
:Out <filename> | STDERR | STDOUT
Creates and redirects all query results to the file specified by file name, to stderr or to stdout. By default, output is
sent to stdout. If the file already exists, it will be truncated to zero bytes. The Out command can appear multiple
times in a script.
:Perftrace <filename> | STDERR | STDOUT
Creates and redirects all performance trace information to the file specified by file name, to stderr or to stdout.
By default performance trace output is sent to stdout. If the file already exists, it will be truncated to zero bytes.
The Perftrace command can appear multiple times in a script.
Execution Control Commands
:On Error[ exit | ignore]
Sets the action to be performed when an error occurs during script or batch execution.
When the exit option is used, sqlcmd exits with the appropriate error value.
When the ignore option is used, sqlcmd ignores the error and continues executing the batch or script. By default,
an error message will be printed.
[:] QUIT
Causes sqlcmd to exit.
[:] EXIT[ (statement) ]
Lets you use the result of a SELECT statement as the return value from sqlcmd. If numeric, the first column of the
last result row is converted to a 4-byte integer (long). MS-DOS passes the low byte to the parent process or
operating system error level. Windows 200x passes the whole 4-byte integer. The syntax is:
:EXIT(query)
For example:
:EXIT(SELECT @@ROWCOUNT)
You can also include the EXIT parameter as part of a batch file. For example, at the command prompt, type:
sqlcmd -Q "EXIT(SELECT COUNT(*) FROM '%1')"
The sqlcmd utility sends everything between the parentheses () to the server. If a system stored procedure selects
a set and returns a value, only the selection is returned. The EXIT() statement with nothing between the
parentheses executes everything before it in the batch and then exits without a return value.
When an incorrect query is specified, sqlcmd will exit without a return value.
Here is a list of EXIT formats:
:EXIT
Does not execute the batch, and then quits immediately and returns no value.
:EXIT( )
Executes the batch, and then quits and returns no value.
:EXIT(query)
Executes the batch that includes the query, and then quits after it returns the results of the query.
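The 4-byte conversion and the low-byte behavior described above can be illustrated with a short Python sketch. It models the documented rules only; `exit_value` is a hypothetical name, not a sqlcmd API.

```python
def exit_value(first_column, low_byte_only=False):
    """Sketch of :EXIT(query): the first column of the last result row
    is converted to a 4-byte integer; under MS-DOS semantics only the
    low byte reaches the parent process, while Windows passes the
    whole 4-byte value."""
    value = int(first_column) & 0xFFFFFFFF  # truncate to 4 bytes (long)
    return value & 0xFF if low_byte_only else value

print(exit_value("300"))                      # 300 (whole 4-byte value)
print(exit_value("300", low_byte_only=True))  # 44 (300 mod 256, low byte)
```

This is worth keeping in mind when a batch file tests the return value: a row count of 256 would appear as 0 under low-byte semantics.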
If RAISERROR is used within a sqlcmd script and a state of 127 is raised, sqlcmd will quit and return the
message ID back to the client. For example:
RAISERROR(50001, 10, 127)
This error will cause the sqlcmd script to end and return the message ID 50001 to the client.
The return values -1 to -99 are reserved by SQL Server; sqlcmd defines the following additional return
values:
-100 Error encountered prior to selecting return value
-101 No rows found when selecting return value
-102 Conversion error occurred when selecting return value
GO [count]
GO signals both the end of a batch and the execution of any cached Transact-SQL statements. The batch is
executed multiple times as separate batches; you cannot declare a variable more than once in a single batch.
Miscellaneous Commands
:r <filename>
Parses additional Transact-SQL statements and sqlcmd commands from the file specified by <filename> into the
statement cache.
If the file contains Transact-SQL statements that are not followed by GO, you must enter GO on the line that
follows :r.
NOTE
<filename> is read relative to the startup directory in which sqlcmd was run.
The file will be read and executed after a batch terminator is encountered. You can issue multiple :r commands.
The file may include any sqlcmd command. This includes the batch terminator GO.
NOTE
The line count that is displayed in interactive mode will be increased by one for every :r command encountered. The :r
command will appear in the output of the list command.
:Serverlist
Lists the locally configured servers and the names of the servers broadcasting on the network.
:Connect server_name[\instance_name] [-l timeout] [-U user_name [-P password]]
Connects to an instance of SQL Server. Also closes the current connection.
Time-out options:
0 wait forever
The SQLCMDSERVER scripting variable will reflect the current active connection.
If timeout is not specified, the value of the SQLCMDLOGINTIMEOUT variable is the default.
If only user_name is specified (either as an option, or as an environment variable), the user will be prompted to
enter a password. This is not true if the SQLCMDUSER or SQLCMDPASSWORD environment variables have
been set. If neither options nor environment variables are provided, Windows Authentication mode is used to
login. For example, to connect to the instance instance1 of SQL Server on myserver by using integrated security,
you would use the following:
:connect myserver\instance1
To connect to the default instance of myserver using scripting variables, you would use the following:
:setvar myusername test
:setvar myservername myserver
:connect $(myservername) $(myusername)
NOTE
The command is executed on the computer on which sqlcmd is running.
For example, at the sqlcmd prompt, type:
USE AdventureWorks2012;
GO
When you press ENTER, the following informational message is printed: "Changed database context to
'AdventureWorks2012'."
Output Format from Transact-SQL Queries
sqlcmd first prints a column header that contains the column names specified in the select list. The column names
are separated by using the SQLCMDCOLSEP character. By default, this is a space. If the column name is shorter
than the column width, the output is padded with spaces up to the next column.
This line will be followed by a separator line that is a series of dash characters. The following output shows an
example.
Start sqlcmd. At the sqlcmd command prompt, type the following:
USE AdventureWorks2012;
SELECT TOP (2) BusinessEntityID, FirstName, LastName
FROM Person.Person;
GO
(2 row(s) affected)
Although the BusinessEntityID column is only 4 characters wide, it has been expanded to accommodate the
longer column name. By default, output is terminated at 80 characters. This can be changed by using the -w
option, or by setting the SQLCMDCOLWIDTH scripting variable.
XML Output Format
XML output that is the result of a FOR XML clause is output, unformatted, in a continuous stream.
When you expect XML output, use the following command: :XML ON .
NOTE
sqlcmd returns error messages in the usual format. Notice that the error messages are also output in the XML text stream
in XML format. By using :XML ON , sqlcmd does not display informational messages.
To set the XML mode off, use the following command: :XML OFF .
The GO command should not appear before the XML OFF command is issued because the XML OFF command
switches sqlcmd back to row-oriented output.
XML (streamed) data and rowset data cannot be mixed. If the XML ON command has not been issued before a
Transact-SQL statement that outputs XML streams is executed, the output will be garbled. If the XML ON
command has been issued, you cannot execute Transact-SQL statements that output regular row sets.
NOTE
The :XML command does not support the SET STATISTICS XML statement.
The following examples connect by using Azure Active Directory integrated authentication and Azure Active
Directory password authentication, respectively:
sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -G -l 30
sqlcmd -S Target_DB_or_DW.testsrv.database.windows.net -U bob@contoso.com -P MyAADPassword -G -l 30
See Also
Start the sqlcmd Utility
Run Transact-SQL Script Files Using sqlcmd
Use the sqlcmd Utility
Use sqlcmd with Scripting Variables
Connect to the Database Engine With sqlcmd
Edit SQLCMD Scripts with Query Editor
Manage Job Steps
Create a CmdExec Job Step
SQLdiag Utility
5/3/2018
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The SQLdiag utility is a general purpose diagnostics collection utility that can be run as a console application or as
a service. You can use SQLdiag to collect logs and data files from SQL Server and other types of servers, and use
it to monitor your servers over time or troubleshoot specific problems with your servers. SQLdiag is intended to
expedite and simplify diagnostic information gathering for Microsoft Customer Support Services.
NOTE
This utility may be changed, and applications or scripts that rely on its command line arguments or behavior may not work
correctly in future releases.
Syntax
sqldiag
{ [/?] }
|
{ [/I configuration_file]
[/O output_folder_path]
[/P support_folder_path]
[/N output_folder_management_option]
[/M machine1 [ machine2 machineN]| @machinelistfile]
[/C file_compression_type]
[/B [+]start_time]
[/E [+]stop_time]
[/A SQLdiag_application_name]
[/T { tcp [ ,port ] | np | lpc } ]
[/Q] [/G] [/R] [/U] [/L] [/X] }
|
{ [START | STOP | STOP_ABORT] }
|
{ [START | STOP | STOP_ABORT] /A SQLdiag_application_name }
Arguments
/?
Displays usage information.
/I configuration_file
Sets the configuration file for SQLdiag to use. By default, /I is set to SQLDiag.Xml.
/O output_folder_path
Redirects SQLdiag output to the specified folder. If the /O option is not specified, SQLdiag output is written to a
subfolder named SQLDIAG under the SQLdiag startup folder. If the SQLDIAG folder does not exist, SQLdiag
attempts to create it.
NOTE
The output folder location is relative to the support folder location that can be specified with /P. To set an entirely different
location for the output folder, specify the full directory path for /O.
/P support_folder_path
Sets the support folder path. By default, /P is set to the folder where the SQLdiag executable resides. The support
folder contains SQLdiag support files, such as the XML configuration file, Transact-SQL scripts, and other files that
the utility uses during diagnostics collection. If you use this option to specify an alternate support files path,
SQLdiag will automatically copy the support files it requires to the specified folder if they do not already exist.
NOTE
To set your current folder as the support path, specify %cd% on the command line as follows:
SQLDIAG /P %cd%
/N output_folder_management_option
Sets whether SQLdiag overwrites or renames the output folder when it starts up. Available options:
1 = Overwrites the output folder (default)
2 = When SQLdiag starts up, it renames the output folder to SQLDIAG_00001, SQLDIAG_00002, and so on.
After renaming the current output folder, SQLdiag writes output to the default output folder SQLDIAG.
NOTE
SQLdiag does not append output to the current output folder when it starts up. It can only overwrite the default output
folder (option 1) or rename the folder (option 2), and then it writes output to the new default output folder named SQLDIAG.
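The renaming scheme that /N 2 implies can be sketched in a few lines. This is an illustrative model only (the five-digit zero padding is inferred from the folder names shown above, and `next_archive_name` is a hypothetical helper, not part of SQLdiag):

```python
def next_archive_name(existing):
    """Return the name option /N 2 would give the old SQLDIAG folder.

    `existing` is the set of folder names already present in the startup
    folder; renamed folders follow the pattern SQLDIAG_00001,
    SQLDIAG_00002, and so on.
    """
    n = 1
    while "SQLDIAG_%05d" % n in existing:
        n += 1
    return "SQLDIAG_%05d" % n
```

After the rename, SQLdiag writes new output to a fresh folder named SQLDIAG, as the note above describes.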
NOTE
SQLdiag automatically prefixes DIAG$ to the instance name specified for SQLdiag_application_name. This provides a
sensible service name if you register SQLdiag as a service.
NOTE
SQLdiag ignores the /L argument if a start time or end time is not specified by using the /B and /E command line
arguments.
Using /L does not imply the service mode. To use /L when running SQLdiag as a service, specify it on the
command line when you register the service.
/X
Runs SQLdiag in snapshot mode. SQLdiag takes a snapshot of all configured diagnostics and then shuts down
automatically.
START | STOP | STOP_ABORT
Starts or stops the SQLdiag service. STOP_ABORT forces the service to shut down as quickly as possible, without
finishing the diagnostics collection that is currently in progress.
When these service control arguments are used, they must be the first argument used on the command line. For
example:
SQLDIAG START
Only the /A argument, which specifies a named instance of SQLdiag, can be used with START, STOP, or
STOP_ABORT to control a specific instance of the SQLdiag service. For example:
SQLDIAG START /A SQLdiag_application_name
Security Requirements
Unless SQLdiag is run in generic mode (by specifying the /G command line argument), the user who runs
SQLdiag must be a member of the Windows Administrators group and a member of the SQL Server sysadmin
fixed server role. By default, SQLdiag connects to SQL Server by using Windows Authentication, but it also
supports SQL Server Authentication.
Performance Considerations
The performance effects of running SQLdiag depend on the type of diagnostic data you have configured it to
collect. For example, if you have configured SQLdiag to collect SQL Server Profiler tracing information, the more
event classes you choose to trace, the more your server performance is affected.
The performance impact of running SQLdiag is approximately the sum of the costs of collecting the configured
diagnostics separately. For example, collecting a trace with SQLdiag incurs the same performance cost as
collecting it with SQL Server Profiler; the overhead that SQLdiag itself adds beyond that is negligible.
Configuration Files
On startup, SQLdiag reads the configuration file and the command line arguments that have been specified. You
specify the types of diagnostic information that SQLdiag collects in the configuration file. By default, SQLdiag
uses the SQLDiag.Xml configuration file, which is extracted each time the tool runs and is located in the SQLdiag
utility startup folder. The configuration file uses the XML schema, SQLDiag_schema.xsd, which is also extracted
into the utility startup directory from the executable file each time SQLdiag runs.
Editing the Configuration Files
You can copy and edit SQLDiag.Xml to change the types of diagnostic data that SQLdiag collects. When editing
the configuration file always use an XML editor that can validate the configuration file against its XML schema,
such as Management Studio. You should not edit SQLDiag.Xml directly. Instead, make a copy of SQLDiag.Xml and
rename it to a new file name in the same folder. Then edit the new file, and use the /I argument to pass it to
SQLdiag.
Editing the Configuration File When SQLdiag Runs as a Service
If you have already run SQLdiag as a service and need to edit the configuration file, unregister the SQLDIAG
service by specifying the /U command line argument and then re-register the service by using the /R command
line argument. Unregistering and re-registering the service removes old configuration information that was cached
in the Windows registry.
Output Folder
If you do not specify an output folder with the /O argument, SQLdiag creates a subfolder named SQLDIAG under
the SQLdiag startup folder. For diagnostic information collection that involves high volume tracing, such as SQL
Server Profiler, make sure that the output folder is on a local drive with enough space to store the requested
diagnostic output.
When SQLdiag is restarted, it overwrites the contents of the output folder. To avoid this, specify /N 2 on the
command line.
NOTE
Pausing the SQLdiag service is not supported. If you attempt to pause the SQLdiag service, it stops after it finishes
collecting the diagnostics that it was collecting when you paused it. If you restart SQLdiag after stopping it, the application
restarts and overwrites the output folder. To avoid overwriting the output folder, specify /N 2 on the command line.
NOTE
/A is the only command-line argument that can be used with START, STOP, or STOP_ABORT. If you need to specify a
named instance of SQLdiag with one of the service control verbs, specify /A after the control verb on the command line as
shown in the previous syntax example. When control verbs are used, they must be the first argument on the command line.
To stop the service as quickly as possible, run SQLDIAG STOP_ABORT in the utility startup folder. This command
aborts any diagnostics collection currently in progress without waiting for it to finish.
NOTE
Use SQLDiag STOP or SQLDIAG STOP_ABORT to stop the SQLdiag service. Do not use the Windows Services Console to
stop SQLdiag or other SQL Server services.
When a relative start_time is specified, SQLdiag starts at a time that is relative to the current date and time. When
a relative end_time is specified, SQLdiag ends at a time that is relative to the specified start_time. If the start or
end date and time that you have specified is in the past, SQLdiag forcibly changes the start date so that the start
date and time are in the future.
This has important implications for the start and end times you choose. Consider the following example:
sqldiag /B +01:00:00 /E 08:30:00
If the current time is 08:00, the end time passes before diagnostic collection actually begins. Because SQLDiag
automatically adjusts start and end dates to the next day when they occur in the past, in this example diagnostic
collection starts at 09:00 today (a relative start time has been specified with +) and continues collecting until 08:30
the following morning.
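The adjustment rules described above can be modeled in a few lines of Python. This is an illustrative sketch of the documented behavior, not SQLdiag's actual implementation; the hh:mm:ss format and the `resolve_window` helper are assumptions made for the example:

```python
from datetime import datetime, timedelta

def resolve_window(now, start_spec, end_spec):
    """Resolve /B and /E values the way the text describes.

    A spec beginning with '+' is relative: the start is offset from
    `now`, and the end is offset from the resolved start.  An absolute
    time that falls in the past is pushed forward to the next day.
    """
    def parse(spec, base):
        relative = spec.startswith("+")
        h, m, s = map(int, spec.lstrip("+").split(":"))
        if relative:
            return base + timedelta(hours=h, minutes=m, seconds=s)
        t = now.replace(hour=h, minute=m, second=s, microsecond=0)
        while t <= base:           # in the past: move to the next day
            t += timedelta(days=1)
        return t

    start = parse(start_spec, now)
    end = parse(end_spec, start)
    return start, end
```

With the current time at 08:00, a relative start of +01:00:00 resolves to 09:00 today, and an absolute end of 08:30:00, which would otherwise pass before collection begins, is pushed forward to 08:30 the following morning.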
Stopping and Restarting SQLdiag to Collect Daily Diagnostics
To collect a specified set of diagnostics on a daily basis without having to manually start and stop SQLdiag, use
the /L argument. The /L argument causes SQLdiag to run continuously by automatically restarting itself after a
scheduled shutdown. When /L is specified, and SQLdiag stops because it has reached the end time specified with
the /E argument, or it stops because it is being run in snapshot mode by using the /X argument, SQLdiag restarts
instead of exiting.
The following example specifies that SQLdiag run in continuous mode, automatically restarting after diagnostic
data collection occurs between 03:00:00 and 05:00:00.
sqldiag /B 03:00:00 /E 05:00:00 /L
The following example specifies that SQLdiag run in continuous mode to automatically restart after taking a
diagnostic data snapshot at 03:00:00.
sqldiag /B 03:00:00 /X /L
You can also use the net start command to start the SQLDIAG service.
When you restart SQLdiag, it overwrites the contents in the current output folder. To avoid this, specify /N 2 on
the command line to rename the output folder when the utility starts.
Pausing the SQLdiag service is not supported.
NOTE
To collect SQL Server Profiler trace information from clustered SQL Server instances, administrative shares (ADMIN$) must be
enabled on the cluster.
See Also
Command Prompt Utility Reference (Database Engine)
sqlmaint Utility
5/3/2018 • 10 minutes to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The sqlmaint utility performs a specified set of maintenance operations on one or more databases. Use sqlmaint
to run DBCC checks, back up a database and its transaction log, update statistics, and rebuild indexes. All database
maintenance activities generate a report that can be sent to a designated text file, HTML file, or e-mail account.
sqlmaint executes database maintenance plans created with previous versions of SQL Server. To run SQL Server
maintenance plans from the command prompt, use the dtexec Utility.
IMPORTANT
This feature will be removed in the next version of Microsoft SQL Server. Avoid using this feature in new development work,
and plan to modify applications that currently use this feature. Use SQL Server maintenance plan feature instead. For more
information on maintenance plans, see Maintenance Plans.
Syntax
sqlmaint
[-?] |
[
[-S server_name[\instance_name]]
[-U login_ID [-P password]]
{
[-D database_name | -PlanName name | -PlanID guid ]
[-Rpt text_file]
[-To operator_name]
[-HtmlRpt html_file [-DelHtmlRpt <time_period>] ]
[-RmUnusedSpace threshold_percent free_percent]
[-CkDB | -CkDBNoIdx]
[-CkAl | -CkAlNoIdx]
[-CkCat]
[-UpdOptiStats sample_percent]
[-RebldIdx free_space]
[-SupportComputedColumn]
[-WriteHistory]
[
{-BkUpDB [backup_path] | -BkUpLog [backup_path] }
{-BkUpMedia
{DISK [
[-DelBkUps <time_period>]
[-CrBkSubDir ]
[-UseDefDir ]
]
| TAPE
}
}
[-BkUpOnlyIfClean]
[-VrfyBackup]
]
}
]
<time_period> ::=
number[minutes | hours | days | weeks | months]
Arguments
The parameters and their values must be separated by a space. For example, there must be a space between -S and
server_name.
-?
Specifies that the syntax diagram for sqlmaint be returned. This parameter must be used alone.
-S server_name[ \instance_name]
Specifies the target instance of Microsoft SQL Server. Specify server_name to connect to the default instance of
SQL Server Database Engine on that server. Specify server_name\instance_name to connect to a named instance
of Database Engine on that server. If no server is specified, sqlmaint connects to the default instance of Database
Engine on the local computer.
-U login_ID
Specifies the login ID to use when connecting to the server. If not supplied, sqlmaint attempts to use Microsoft
Windows Authentication. If login_ID contains special characters, it must be enclosed in double quotation marks (");
otherwise, the double quotation marks are optional.
IMPORTANT
When possible, use Windows Authentication.
-P password
Specifies the password for the login ID. Only valid if the -U parameter is also supplied. If password contains special
characters, it must be enclosed in double quotation marks; otherwise, the double quotation marks are optional.
IMPORTANT
The password is not masked. When possible, use Windows Authentication.
-D database_name
Specifies the name of the database in which to perform the maintenance operation. If database_name contains
special characters, it must be enclosed in double quotation marks; otherwise, the double quotation marks are
optional.
-PlanName name
Specifies the name of a database maintenance plan defined using the Database Maintenance Plan Wizard. The
only information sqlmaint uses from the plan is the list of the databases in the plan. Any maintenance activities
you specify in the other sqlmaint parameters are applied to this list of databases.
-PlanID guid
Specifies the globally unique identifier (GUID) of a database maintenance plan defined using the Database
Maintenance Plan Wizard. The only information sqlmaint uses from the plan is the list of the databases in the
plan. Any maintenance activities you specify in the other sqlmaint parameters are applied to this list of databases.
This must match a plan_id value in msdb.dbo.sysdbmaintplans.
-Rpt text_file
Specifies the full path and name of the file into which the report is to be generated. The report is also generated on
the screen. The report maintains version information by adding a date to the file name. The date is generated as
follows: at the end of the file name but before the period, in the form _yyyyMMddhhmm. yyyy = year, MM =
month, dd = day, hh = hour, mm = minute.
If you run the utility at 10:23 A.M. on December 1, 1996, and this is the text_file value:
The full Universal Naming Convention (UNC) file name is required for text_file when sqlmaint accesses a remote
server.
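The versioned file name can be reproduced with standard date formatting. The sketch below models the naming rule only; the path passed to it is a hypothetical example, and `versioned_report_name` is not part of sqlmaint:

```python
from datetime import datetime
from os.path import splitext

def versioned_report_name(text_file, when):
    """Insert the _yyyyMMddhhmm stamp before the file extension, as the
    -Rpt description above specifies."""
    root, ext = splitext(text_file)
    return root + when.strftime("_%Y%m%d%H%M") + ext
```

For a run at 10:23 A.M. on December 1, 1996, a report file such as maint.txt would become maint_199612011023.txt.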
-To operator_name
Specifies the operator to whom the generated report is sent through SQL Mail.
-HtmlRpt html_file
Specifies the full path and name of the file into which an HTML report is to be generated. sqlmaint generates the
file name by appending a string of the format _yyyyMMddhhmm to the file name, just as it does for the -Rpt
parameter.
The full UNC file name is required for html_file when sqlmaint accesses a remote server.
-DelHtmlRpt <time_period>
Specifies that any HTML report in the report directory be deleted if the time interval after the creation of the
report file exceeds <time_period>. -DelHtmlRpt looks for files whose name fits the pattern generated from the
html_file parameter. If html_file is c:\Program Files\Microsoft SQL
Server\Mssql\Backup\AdventureWorks2012_maint.htm, then -DelHtmlRpt causes sqlmaint to delete any files
whose names match the pattern C:\Program Files\Microsoft SQL
Server\Mssql\Backup\AdventureWorks2012_maint*.htm and that are older than the specified <time_period>.
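The pattern matching and age check that -DelHtmlRpt performs can be sketched as follows. This is an illustrative model, not sqlmaint's code: the function works on (name, creation time) pairs rather than the real file system, and `stale_reports` is a hypothetical helper:

```python
import fnmatch
from os.path import splitext

def stale_reports(html_file, files, now, max_age_days):
    """Return the names -DelHtmlRpt would delete: entries in `files`
    (name, creation-time pairs, in seconds) that match the <root>*<ext>
    pattern derived from html_file and are older than the retention
    period."""
    root, ext = splitext(html_file)
    pattern = root + "*" + ext
    cutoff = now - max_age_days * 86400
    return [name for name, ctime in files
            if fnmatch.fnmatch(name, pattern) and ctime < cutoff]
```

Files that match the generated pattern but were created within the retention period are left in place.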
-RmUnusedSpace threshold_percent free_percent
Specifies that unused space be removed from the database specified in -D. This option is only useful for databases
that are defined to grow automatically. Threshold_percent specifies in megabytes the size that the database must
reach before sqlmaint attempts to remove unused data space. If the database is smaller than the
threshold_percent, no action is taken. Free_percent specifies how much unused space must remain in the database,
specified as a percentage of the final size of the database. For example, if a 200-MB database contains 100 MB of
data, specifying 10 for free_percent results in the final database size being 110 MB. Note that a database is not
expanded if it is smaller than free_percent plus the amount of data in the database. For example, if a 108-MB
database has 100 MB of data, specifying 10 for free_percent does not expand the database to 110 MB; it remains
at 108 MB.
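The sizing rule described above, including both worked examples, can be captured in a short sketch. The arithmetic follows the documented examples (free space computed from the data size, threshold interpreted in megabytes); `final_size_mb` is an illustrative helper, not sqlmaint's implementation:

```python
def final_size_mb(current_size, data_size, threshold_mb, free_percent):
    """Model the -RmUnusedSpace sizing rule.

    No action is taken unless the database has reached threshold_mb.
    Otherwise the database is shrunk toward the data size plus
    free_percent of free space, but it is never expanded.
    """
    if current_size < threshold_mb:
        return current_size                   # below threshold: no action
    target = data_size + data_size * free_percent / 100.0
    return min(current_size, target)          # shrink only, never grow
```

A 200-MB database with 100 MB of data and free_percent 10 shrinks to 110 MB, while a 108-MB database with the same data stays at 108 MB.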
-CkDB | -CkDBNoIdx
Specifies that a DBCC CHECKDB statement or a DBCC CHECKDB statement with the NOINDEX option be run in
the database specified in -D. For more information, see DBCC CHECKDB.
A warning is written to text_file if the database is in use when sqlmaint runs.
-CkAl | -CkAlNoIdx
Specifies that a DBCC CHECKALLOC statement, or a DBCC CHECKALLOC statement with the NOINDEX option,
be run in the database specified in -D. For more information, see DBCC CHECKALLOC (Transact-SQL).
-CkCat
Specifies that a DBCC CHECKCATALOG statement be run in the database specified in -D. For more information,
see DBCC CHECKCATALOG (Transact-SQL).
-UpdOptiStats sample_percent
Specifies that the following statement be run on each table in the database:
If the tables contain computed columns, you must also specify the -SupportComputedColumn argument
when you use -UpdOptiStats.
For more information, see UPDATE STATISTICS (Transact-SQL).
-RebldIdx free_space
Specifies that indexes on tables in the target database should be rebuilt by using the free_space percent value as
the inverse of the fill factor. For example, if free_space percentage is 30, then the fill factor used is 70. If a
free_space percentage value of 100 is specified, then the indexes are rebuilt with the original fill factor value.
If the indexes are on computed columns, you must also specify the -SupportComputedColumn argument when
you use -RebldIdx.
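The fill-factor relationship is a one-line inversion; the sketch below models it, with None standing in for "use the original fill factor" (an assumption made for the example, since the utility has no such return value):

```python
def fill_factor(free_space):
    """-RebldIdx interprets free_space as the inverse of the fill factor.

    A value of 100 means the indexes are rebuilt with their original
    fill factor, modeled here as None.
    """
    if free_space == 100:
        return None            # rebuild with the original fill factor
    return 100 - free_space
```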
-SupportComputedColumn
Must be specified to run DBCC maintenance commands with sqlmaint on computed columns.
-WriteHistory
Specifies that an entry be made in msdb.dbo.sysdbmaintplan_history for each maintenance action performed by
sqlmaint. If -PlanName or -PlanID is specified, the entries in sysdbmaintplan_history use the ID of the specified
plan. If -D is specified, the entries in sysdbmaintplan_history are made with zeroes for the plan ID.
-BkUpDB [ backup_path] | -BkUpLog [ backup_path ]
Specifies a backup action. -BkUpDb backs up the entire database. -BkUpLog backs up only the transaction log.
backup_path specifies the directory for the backup. backup_path is not needed if -UseDefDir is also specified, and
is overridden by -UseDefDir if both are specified. The backup can be placed in a directory or a tape device address
(for example, \\.\TAPE0). The file name for a database backup is generated automatically as follows:
dbname_db_yyyyMMddhhmm.BAK
where
dbname is the name of the database being backed up.
yyyyMMddhhmm is the time of the backup operation with yyyy = year, MM = month, dd = day, hh = hour,
and mm = minute.
The file name for a transaction backup is generated automatically with a similar format:
dbname_log_yyyyMMddhhmm.BAK
If you use the -BkUpDB parameter, you must also specify the media by using the -BkUpMedia parameter.
-BkUpMedia
Specifies the media type of the backup, either DISK or TAPE.
DISK
Specifies that the backup medium is disk.
-DelBkUps <time_period>
For disk backups, specifies that any backup file in the backup directory be deleted if the time interval after the
creation of the backup exceeds the <time_period>.
-CrBkSubDir
For disk backups, specifies that a subdirectory be created in the [backup_path] directory or in the default backup
directory if -UseDefDir is also specified. The name of the subdirectory is generated from the database name
specified in -D. -CrBkSubDir offers an easy way to put all the backups for different databases into separate
subdirectories without having to change the backup_path parameter.
-UseDefDir
For disk backups, specifies that the backup file be created in the default backup directory. UseDefDir overrides
backup_path if both are specified. With a default Microsoft SQL Server setup, the default backup directory is
C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Backup.
TAPE
Specifies that the backup medium is tape.
-BkUpOnlyIfClean
Specifies that the backup occur only if any specified -Ck checks did not find problems with the data. Maintenance
actions run in the same sequence as they appear on the command line. Specify the parameters -CkDB, -
CkDBNoIdx, -CkAl, -CkAlNoIdx, -CkTxtAl, or -CkCat before the -BkUpDB/-BkUpLog parameter(s) if you are
also going to specify -BkUpOnlyIfClean; otherwise, the backup occurs whether or not the checks report problems.
-VrfyBackup
Specifies that RESTORE VERIFYONLY be run on the backup when it completes.
number[minutes | hours | days | weeks | months]
Specifies the time interval used to determine if a report or backup file is old enough to be deleted. number is an
integer followed (without a space) by a unit of time. Valid examples:
12weeks
3months
15days
If only number is specified, the default date part is weeks.
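The <time_period> grammar can be sketched as a small parser. The seconds-per-unit table (including the 30-day month) is an illustrative assumption, and `parse_time_period` is a hypothetical helper, not part of sqlmaint:

```python
import re

UNIT_SECONDS = {
    "minutes": 60, "hours": 3600, "days": 86400,
    "weeks": 7 * 86400, "months": 30 * 86400,  # 30-day month is an assumption
}

def parse_time_period(spec):
    """Parse a <time_period> such as '12weeks' or '3months' into seconds.

    A bare number defaults to weeks, as the text above states.
    """
    m = re.fullmatch(r"(\d+)([a-z]*)", spec)
    if not m:
        raise ValueError("invalid time_period: %r" % spec)
    number, unit = int(m.group(1)), m.group(2) or "weeks"
    return number * UNIT_SECONDS[unit]
```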
Remarks
The sqlmaint utility performs maintenance operations on one or more databases. If -D is specified, the operations
specified in the remaining switches are performed only on the specified database. If -PlanName or -PlanID are
specified, the only information sqlmaint retrieves from the specified maintenance plan is the list of databases in
the plan. All operations specified in the remaining sqlmaint parameters are applied against each database in the
list obtained from the plan. The sqlmaint utility does not apply any of the maintenance activities defined in the
plan itself.
The sqlmaint utility returns 0 if it runs successfully or 1 if it fails. Failure is reported:
If any of the maintenance actions fail.
If -CkDB, -CkDBNoIdx, -CkAl, -CkAlNoIdx, -CkTxtAl, or -CkCat checks find problems with the data.
If a general failure is encountered.
Permissions
The sqlmaint utility can be executed by any Windows user with Read and Execute permission on sqlmaint.exe,
which by default is stored in the x:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn folder.
Additionally, the SQL Server login specified with -login_ID must have the SQL Server permissions required to
perform the specified action. If the connection to SQL Server uses Windows Authentication, the SQL Server login
mapped to the authenticated Windows user must have the SQL Server permissions required to perform the
specified action.
For example, using the -BkUpDB requires permission to execute the BACKUP statement. And using the -
UpdOptiStats argument requires permission to execute the UPDATE STATISTICS statement. For more
information, see the "Permissions" sections of the corresponding topics in Books Online.
Examples
A. Performing DBCC checks on a database
B. Updating statistics using a 15% sample in all databases in a plan. Also, shrink any databases that have
reached 110 MB so that they have only 10% free space
C. Backing up all the databases in a plan to their individual subdirectories in the default x:\Program
Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Backup directory. Also, delete any backups
older than 2 weeks
sqlmaint -S MyServer -PlanName MyUserDBPlan -BkUpDB -BkUpMedia DISK -UseDefDir -CrBkSubDir -DelBkUps 2weeks
See Also
BACKUP (Transact-SQL)
UPDATE STATISTICS (Transact-SQL)
sqllogship Application
5/3/2018 • 4 minutes to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The sqllogship application performs a backup, copy, or restore operation and associated clean-up tasks for a log
shipping configuration. The operation is performed on a specific instance of Microsoft SQL Server for a specific
database.
For the syntax conventions, see Command Prompt Utility Reference (Database Engine).
Syntax
sqllogship -server instance_name { -backup primary_id | -copy secondary_id | -restore secondary_id }
[ -verboselevel level ] [ -logintimeout timeout_value ] [ -querytimeout timeout_value ]
Arguments
-server instance_name
Specifies the instance of SQL Server where the operation will run. The server instance to specify depends on
which log-shipping operation is being specified. For -backup, instance_name must be the name of the primary
server in a log shipping configuration. For -copy or -restore, instance_name must be the name of a secondary
server in a log shipping configuration.
-backup primary_id
Performs a backup operation for the primary database whose primary ID is specified by primary_id. You can
obtain this ID by selecting it from the log_shipping_primary_databases system table or by using the
sp_help_log_shipping_primary_database stored procedure.
The backup operation creates the log backup in the backup directory. The sqllogship application then cleans out
any old backup files, based on the file retention period. Next, the application logs history for the backup operation
on the primary server and the monitor server. Finally, the application runs sp_cleanup_log_shipping_history, which
cleans out old history information, based on the retention period.
-copy secondary_id
Performs a copy operation to copy backups from the specified secondary server for the secondary database, or
databases, whose secondary ID is specified by secondary_id. You can obtain this ID by selecting it from the
log_shipping_secondary system table or by using the sp_help_log_shipping_secondary_database stored procedure.
The operation copies the backup files from the backup directory to the destination directory. The sqllogship
application then logs the history for the copy operation on the secondary server and the monitor server.
-restore secondary_id
Performs a restore operation on the specified secondary server for the secondary database, or databases, whose
secondary ID is specified by secondary_id. You can obtain this ID by using the
sp_help_log_shipping_secondary_database stored procedure.
Any backup files in the destination directory that were created after the most recent restore point are restored to
the secondary database, or databases. The sqllogship application then cleans out any old backup files, based on
the file retention period. Next, the application logs history for the restore operation on the secondary server and
the monitor server. Finally, the application runs sp_cleanup_log_shipping_history, which cleans out old history
information, based on the retention period.
-verboselevel level
Specifies the level of messages added to the log shipping history. level is one of the following integers:
LEVEL DESCRIPTION
-logintimeout timeout_value
Specifies the amount of time allotted for attempting to log in to the server instance before the attempt times out.
The default is 15 seconds. timeout_value is int.
-querytimeout timeout_value
Specifies the amount of time allotted for starting the specified operation before the attempt times out. The default
is no timeout period. timeout_value is int.
Remarks
We recommend that you use the backup, copy, and restore jobs to perform the backup, copy, and restore when
possible. To start these jobs from a batch operation or another application, call the sp_start_job stored procedure.
The log shipping history created by sqllogship is interspersed with the history created by log shipping backup,
copy, and restore jobs. If you plan to use sqllogship repeatedly to perform backup, copy, or restore operations for
a log shipping configuration, consider disabling the corresponding log shipping job or jobs. For more information,
see Disable or Enable a Job.
The sqllogship application, SqlLogShip.exe, is installed in the x:\Program Files\Microsoft SQL
Server\130\Tools\Binn directory.
Permissions
sqllogship uses Windows Authentication. The Windows Authentication account where the command is run
requires Windows directory access and SQL Server permissions. The requirement depends on whether the
sqllogship command specifies the -backup, -copy, or -restore option.
OPTION DIRECTORY ACCESS PERMISSIONS
-backup Requires read/write access to the backup directory. Requires the same permissions as the BACKUP
statement. For more information, see BACKUP (Transact-SQL).
-copy Requires read access to the backup directory and write access to the copy directory. Requires the same
permissions as the sp_help_log_shipping_secondary_database stored procedure.
-restore Requires read/write access to the copy directory. Requires the same permissions as the RESTORE
statement. For more information, see RESTORE (Transact-SQL).
NOTE
To find out the paths of the backup and copy directories, you can run the sp_help_log_shipping_secondary_database
stored procedure or view the log_shipping_secondary table in msdb. The paths of the backup directory and destination
directory are in the backup_source_directory and backup_destination_directory columns, respectively.
See Also
About Log Shipping (SQL Server)
log_shipping_primary_databases (Transact-SQL)
log_shipping_secondary (Transact-SQL)
sp_cleanup_log_shipping_history (Transact-SQL)
sp_help_log_shipping_primary_database (Transact-SQL)
sp_help_log_shipping_secondary_database (Transact-SQL)
sp_start_job (Transact-SQL)
sqlps Utility
5/3/2018 • 3 minutes to read
THIS TOPIC APPLIES TO: SQL Server Azure SQL Database Azure SQL Data Warehouse Parallel
Data Warehouse
The sqlps utility starts a Windows PowerShell session with the SQL Server PowerShell provider and cmdlets
loaded and registered. You can enter PowerShell commands or scripts that use the SQL Server PowerShell
components to work with instances of SQL Server and their objects.
IMPORTANT
This feature is in maintenance mode and may be removed in a future version of Microsoft SQL Server. Avoid using this
feature in new development work, and plan to modify applications that currently use this feature. Use the sqlps PowerShell
module instead. For more information about the sqlps module, see Import the SQLPS Module.
Syntax
sqlps
[ [ [ -NoLogo ][ -NoExit ][ -NoProfile ]
[ -OutPutFormat { Text | XML } ] [ -InPutFormat { Text | XML } ]
]
[ -Command { -
| script_block [ -args argument_array ]
| string [ command_parameters ]
}
]
]
[ -? | -Help ]
Arguments
-NoLogo
Specifies that the sqlps utility hide the copyright banner when it starts.
-NoExit
Specifies that the sqlps utility continue running after the startup commands have completed.
-NoProfile
Specifies that the sqlps utility not load a user profile. User profiles record commonly used aliases, functions, and
variables for use across PowerShell sessions.
-OutPutFormat { Text | XML }
Specifies that the sqlps utility output be formatted as either text strings (Text) or in a serialized CLIXML format
(XML).
-InPutFormat { Text | XML }
Specifies that input to the sqlps utility is formatted as either text strings (Text) or in a serialized CLIXML format
(XML).
-Command
Specifies the command for the sqlps utility to run. The sqlps utility runs the command and then exits, unless -
NoExit is also specified. Do not specify any other switches after -Command; they are read as command
parameters.
-Command - specifies that the sqlps utility read its input from standard input.
script_block [ -args argument_array ]
Specifies a block of PowerShell commands to run; the block must be enclosed in braces: {}. Script_block can only be
specified when the sqlps utility is called from either PowerShell or another sqlps utility session. The
argument_array is an array of PowerShell variables containing the arguments for the PowerShell commands in
the script_block.
string [ command_parameters ]
Specifies a string that contains the PowerShell commands to be run. Use the format "& {command}". The
quotation marks indicate a string, and the invoke operator (&) causes the sqlps utility to run the command.
[ -? | -Help ]
Shows the syntax summary of the sqlps utility options.
Remarks
The sqlps utility starts the PowerShell environment (PowerShell.exe) and loads the SQL Server PowerShell
module. The module, also named sqlps, loads and registers these SQL Server PowerShell snap-ins:
Microsoft.SqlServer.Management.PSProvider.dll
Implements the SQL Server PowerShell provider and associated cmdlets such as Encode-SqlName and
Decode-SqlName.
Microsoft.SqlServer.Management.PSSnapin.dll
Implements the Invoke-Sqlcmd and Invoke-PolicyEvaluation cmdlets.
You can use the sqlps utility to do the following:
Interactively run PowerShell commands.
Run PowerShell script files.
Run SQL Server cmdlets.
Use the SQL Server provider paths to navigate through the hierarchy of SQL Server objects.
By default, the sqlps utility runs with the scripting execution policy set to Restricted. This prevents running
any PowerShell scripts. You can use the Set-ExecutionPolicy cmdlet to enable running signed scripts, or
any scripts. Only run scripts from trusted sources, and secure all input and output files by using the
appropriate NTFS permissions. For more information about enabling PowerShell scripts, see Running
Windows PowerShell Scripts.
The version of the sqlps utility in SQL Server 2008 and SQL Server 2008 R2 was implemented as a
Windows PowerShell 1.0 mini-shell. Mini-shells have certain restrictions, such as not allowing users to load
snap-ins other than those loaded by the mini-shell. These restrictions do not apply to the SQL Server 2012
(11.x) and higher versions of the utility, which have been changed to use the sqlps module.
Examples
A. Run the sqlps utility in default, interactive mode without the copyright banner
sqlps -NoLogo
B. Run a SQL Server PowerShell script from the command prompt
C. Run a SQL Server PowerShell script from the command prompt, and keep running after the script
completes
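The command lines for examples B and C were not captured above. The following invocations are a sketch of the documented pattern, using the "& {command}" string form described earlier and a hypothetical script path (C:\MyFolder\MyScript.ps1):

```
REM B. Run a SQL Server PowerShell script file (hypothetical path)
sqlps -Command "& {C:\MyFolder\MyScript.ps1}"

REM C. Same command with -NoExit, which keeps the sqlps session
REM open after the script completes
sqlps -NoExit -Command "& {C:\MyFolder\MyScript.ps1}"
```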
See Also
Enable or Disable a Server Network Protocol
SQL Server PowerShell
sqlservr Application
5/3/2018 • 5 minutes to read
THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The sqlservr application starts, stops, pauses, and continues an instance of Microsoft SQL Server from a
command prompt.
Syntax
sqlservr [-sinstance_name] [-c] [-dmaster_path] [-f]
[-eerror_log_path] [-lmaster_log_path] [-m]
[-n] [-Ttrace#] [-v] [-x] [-gnumber]
Arguments
-s instance_name
Specifies the instance of SQL Server to connect to. If no named instance is specified, sqlservr starts the default
instance of SQL Server.
IMPORTANT
When starting an instance of SQL Server, you must use the sqlservr application in the appropriate directory for that
instance. For the default instance, run sqlservr from the \MSSQL\Binn directory. For a named instance, run sqlservr from
the \MSSQL$instance_name\Binn directory.
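For example, starting the default instance from a command prompt might look like the following; the drive letter and installation root are illustrative, so substitute your own installation directory:

```
REM Change to the Binn directory of the default instance (path is an assumption)
cd /d "C:\Program Files\Microsoft SQL Server\MSSQL\Binn"

REM Start the default instance independently of the Service Control Manager
sqlservr -c
```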
-c
Indicates that an instance of SQL Server is started independently of the Windows Service Control Manager. This
option is used when starting SQL Server from a command prompt, to shorten the amount of time it takes for SQL
Server to start.
NOTE
When you use this option, you cannot stop SQL Server by using SQL Server Service Manager or the net stop command, and
if you log off the computer, SQL Server is stopped.
-d master_path
Indicates the fully qualified path for the master database file. There are no spaces between -d and master_path. If
you do not provide this option, the existing registry parameters are used.
-f
Starts an instance of SQL Server with minimal configuration. This is useful if the setting of a configuration value
(for example, over-committing memory) has prevented the server from starting.
-e error_log_path
Indicates the fully qualified path for the error log file. If not specified, the default location is <Drive>:\Program
Files\Microsoft SQL Server\MSSQL\Log\Errorlog for the default instance and <Drive>:\Program Files\Microsoft
SQL Server\MSSQL$instance_name\Log\Errorlog for a named instance. There are no spaces between -e and
error_log_path.
-l master_log_path
Indicates the fully qualified path for the master database transaction log file. There are no spaces between -l and
master_log_path.
-m
Indicates to start an instance of SQL Server in single-user mode. Only a single user can connect when SQL Server
is started in single-user mode. The CHECKPOINT mechanism, which guarantees that completed transactions are
regularly written from the disk cache to the database device, is not started. (Typically, this option is used if you
experience problems with system databases that require repair.) Enables the sp_configure allow updates option.
By default, allow updates is disabled.
-n
Allows you to start a named instance of SQL Server. Without the -s parameter set, the default instance attempts to
start. You must switch to the appropriate BINN directory for the instance at a command prompt before starting
sqlservr.exe. For example, if Instance1 were to use \mssql$Instance1 for its binaries, the user must be in the
\mssql$Instance1\binn directory to start sqlservr.exe -s instance1. If you start an instance of SQL Server with
the -n option, it is advisable to use the -e option too, or SQL Server events are not logged.
-T trace#
Indicates that an instance of SQL Server should be started with a specified trace flag (trace#) in effect. Trace flags
are used to start the server with nonstandard behavior. For more information, see Trace Flags (Transact-SQL).
IMPORTANT
When specifying a trace flag, use -T to pass the trace flag number. A lowercase t (-t) is accepted by SQL Server; however, -t
sets other internal trace flags required by SQL Server support engineers.
-v
Displays the server version number.
-x
Disables the keeping of CPU time and cache-hit ratio statistics. Allows maximum performance.
-g memory_to_reserve
Specifies an integer number of megabytes (MB) of memory that SQL Server leaves available for memory
allocations within the SQL Server process, but outside the SQL Server memory pool. The memory outside of the
memory pool is the area used by SQL Server for loading items such as extended procedure .dll files, the OLE
DB providers referenced by distributed queries, and automation objects referenced in Transact-SQL statements.
The default is 256 MB.
Use of this option may help tune memory allocation, but only when physical memory exceeds the configured limit
set by the operating system on virtual memory available to applications. Use of this option may be appropriate in
large memory configurations in which the memory usage requirements of SQL Server are atypical and the virtual
address space of the SQL Server process is totally in use. Incorrect use of this option can lead to conditions under
which an instance of SQL Server may not start or may encounter run-time errors.
Use the default for the -g parameter unless you see any of the following warnings in the SQL Server error log:
"Failed Virtual Allocate Bytes: FAIL_VIRTUAL_RESERVE <size>"
"Failed Virtual Allocate Bytes: FAIL_VIRTUAL_COMMIT <size>"
These messages may indicate that SQL Server is trying to free parts of the SQL Server memory pool in
order to find space for items such as extended stored procedure .dll files or automation objects. In this case,
consider increasing the amount of memory reserved by the -g switch.
Using a value lower than the default increases the amount of memory available to the buffer pool and
thread stacks; this may, in turn, provide some performance benefit to memory-intensive workloads in
systems that do not use many extended stored procedures, distributed queries, or automation objects.
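For example, reserving 512 MB instead of the default 256 MB would look like the following; note that, as with the other single-letter options in the syntax summary, the value follows -g with no space:

```
REM Start from the command prompt with 512 MB reserved outside the memory pool
sqlservr -c -g512
```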
Remarks
In most cases, the sqlservr.exe program is only used for troubleshooting or major maintenance. When SQL Server
is started from the command prompt with sqlservr.exe, SQL Server does not start as a service, so you cannot stop
SQL Server using net commands. Users can connect to SQL Server, but SQL Server tools show the status of the
service, so SQL Server Configuration Manager correctly indicates that the service is stopped. SQL Server
Management Studio can connect to the server, but it also indicates that the service is stopped.
Compatibility Support
The -h parameter is not supported in SQL Server 2017. This parameter was used in earlier versions of 32-bit
instances of SQL Server to reserve virtual memory address space for Hot Add memory metadata when AWE is
enabled. For more information, see Discontinued SQL Server Features in SQL Server 2016.
See Also
Database Engine Service Startup Options
tablediff Utility
5/3/2018 • 5 minutes to read
THIS TOPIC APPLIES TO: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse
The tablediff utility is used to compare the data in two tables for non-convergence, and is particularly useful for
troubleshooting non-convergence in a replication topology. This utility can be used from the command prompt or
in a batch file to perform the following tasks:
Perform a row-by-row comparison between a source table in an instance of Microsoft SQL Server acting as a
replication Publisher and the destination table at one or more instances of SQL Server acting as replication
Subscribers.
Perform a fast comparison by only comparing row counts and schema.
Perform column-level comparisons.
Generate a Transact-SQL script to fix discrepancies at the destination server to bring the source and
destination tables into convergence.
Log results to an output file or into a table in the destination database.
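A minimal invocation that compares a table between a Publisher and a Subscriber and generates a fix script might look like the following sketch; the server, database, table, and file names are hypothetical, and ^ is the cmd.exe line-continuation character:

```
tablediff -sourceserver "SRV1" -sourcedatabase "PubDB" -sourcetable "Customers" ^
          -destinationserver "SRV2" -destinationdatabase "SubDB" -destinationtable "Customers" ^
          -f "C:\Temp\Difference_Customers.sql"
```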
Syntax
tablediff
[ -? ] |
{
-sourceserver source_server_name[\instance_name]
-sourcedatabase source_database
-sourcetable source_table_name
[ -sourceschema source_schema_name ]
[ -sourcepassword source_password ]
[ -sourceuser source_login ]
[ -sourcelocked ]
-destinationserver destination_server_name[\instance_name]
-destinationdatabase subscription_database
-destinationtable destination_table
[ -destinationschema destination_schema_name ]
[ -destinationpassword destination_password ]
[ -destinationuser destination_login ]
[ -destinationlocked ]
[ -b large_object_bytes ]
[ -bf number_of_statements ]
[ -c ]
[ -dt ]
[ -et table_name ]
[ -f [ file_name ] ]
[ -o output_file_name ]
[ -q ]
[ -rc number_of_retries ]
[ -ri retry_interval ]
[ -strict ]
[ -t connection_timeouts ]
}
Arguments
[ -? ]
Returns the list of supported parameters.
-sourceserver source_server_name[\instance_name]
Is the name of the source server. Specify source_server_name for the default instance of SQL Server. Specify
source_server_name\instance_name for a named instance of SQL Server.
-sourcedatabase source_database
Is the name of the source database.
-sourcetable source_table_name
Is the name of the source table being checked.
-sourceschema source_schema_name
The schema owner of the source table. By default, the table owner is assumed to be dbo.
-sourcepassword source_password
Is the password for the login used to connect to the source server using SQL Server Authentication.
IMPORTANT
When possible, supply security credentials at runtime. If you must store credentials in a script file, you should secure the file
to prevent unauthorized access.
-sourceuser source_login
Is the login used to connect to the source server using SQL Server Authentication. If source_login is not supplied,
then Windows Authentication is used when connecting to the source server. When possible, use Windows
Authentication.
-sourcelocked
The source table is locked during the comparison using the TABLOCK and HOLDLOCK table hints.
-destinationserver destination_server_name[\instance_name]
Is the name of the destination server. Specify destination_server_name for the default instance of SQL Server.
Specify destination_server_name\instance_name for a named instance of SQL Server.
-destinationdatabase subscription_database
Is the name of the destination database.
-destinationtable destination_table
Is the name of the destination table.
-destinationschema destination_schema_name
The schema owner of the destination table. By default, the table owner is assumed to be dbo.
-destinationpassword destination_password
Is the password for the login used to connect to the destination server using SQL Server Authentication.
IMPORTANT
When possible, supply security credentials at runtime. If you must store credentials in a script file, you should secure the file
to prevent unauthorized access.
-destinationuser destination_login
Is the login used to connect to the destination server using SQL Server Authentication. If destination_login is not
supplied, then Windows Authentication is used when connecting to the server. When possible, use Windows
Authentication.
-destinationlocked
The destination table is locked during the comparison using the TABLOCK and HOLDLOCK table hints.
-b large_object_bytes
Is the number of bytes to compare for large object data type columns, which include text, ntext, image,
varchar(max), nvarchar(max), and varbinary(max). large_object_bytes defaults to the size of the column. Any
data above large_object_bytes will not be compared.
-bf number_of_statements
Is the number of Transact-SQL statements to write to the current Transact-SQL script file when the -f option is
used. When the number of Transact-SQL statements exceeds number_of_statements, a new Transact-SQL script
file is created.
-c
Compare column-level differences.
-dt
Drop the result table specified by table_name, if the table already exists.
-et table_name
Specifies the name of the result table to create. If this table already exists, -dt must be used or the operation will
fail.
-f [ file_name ]
Generates a Transact-SQL script to bring the table at the destination server into convergence with the table at the
source server. You can optionally specify a name and path for the generated Transact-SQL script file. If file_name is
not specified, the Transact-SQL script file is generated in the directory where the utility runs.
-o output_file_name
Is the full name and path of the output file.
-q
Perform a fast comparison by only comparing row counts and schema.
-rc number_of_retries
Number of times that the utility retries a failed operation.
-ri retry_interval
Interval, in seconds, to wait between retries.
-strict
Source and destination schema are strictly compared.
-t connection_timeouts
Sets the connection timeout period, in seconds, for connections to the source server and destination server.
Return Value
VALUE DESCRIPTION
0 Success
1 Critical error
2 Table differences
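In a batch file, the return value can drive follow-up actions. The following is a sketch with hypothetical server and database names; ERRORLEVEL checks must run from the highest value down, because IF ERRORLEVEL n is true for any exit code >= n:

```
tablediff -sourceserver "SRV1" -sourcedatabase "PubDB" -sourcetable "Customers" -destinationserver "SRV2" -destinationdatabase "SubDB" -destinationtable "Customers"
IF ERRORLEVEL 2 ECHO Tables differ & EXIT /B 2
IF ERRORLEVEL 1 ECHO Critical error & EXIT /B 1
ECHO Tables are identical
```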
Remarks
The tablediff utility cannot be used with non-SQL Server servers.
Tables with sql_variant data type columns are not supported.
By default, the tablediff utility supports the following data type mappings between source and destination
columns:
SOURCE DATA TYPE DESTINATION DATA TYPE
int bigint
timestamp varbinary
varchar(max) text
nvarchar(max) ntext
varbinary(max) image
text varchar(max)
ntext nvarchar(max)
image varbinary(max)
Use the -strict option to disallow these mappings and perform a strict validation.
The source table in the comparison must contain at least one primary key, identity, or ROWGUID column. When
you use the -strict option, the destination table must also have a primary key, identity, or ROWGUID column.
The Transact-SQL script generated to bring the destination table into convergence does not include the following
data types:
varchar(max)
nvarchar(max)
varbinary(max)
timestamp
xml
text
ntext
image
Permissions
To compare tables, you need SELECT ALL permissions on the table objects being compared.
To use the -et option, you must be a member of the db_owner fixed database role, or at least have CREATE TABLE
permission in the subscription database and ALTER permission on the destination owner schema at the
destination server.
To use the -dt option, you must be a member of the db_owner fixed database role, or at least have ALTER
permission on the destination owner schema at the destination server.
To use the -o or -f options, you must have write permissions to the specified file directory location.
See Also
Compare Replicated Tables for Differences (Replication Programming)
Download and install sqlpackage
6/28/2018 • 2 minutes to read
For details about the latest release, see the release notes.
cd ~
mkdir sqlpackage
unzip ~/Downloads/sqlpackage-linux-<version string>.zip -d ~/sqlpackage
echo 'export PATH="$PATH:~/sqlpackage"' >> ~/.bashrc
source ~/.bashrc
sqlpackage
NOTE
On Debian, Redhat, and Ubuntu, you may have missing dependencies. Use the following commands to install these
dependencies depending on your version of Linux:
Debian:
Redhat:
Ubuntu:
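The distribution-specific commands were not captured above. The native dependencies for sqlpackage on Linux have typically been libunwind and libicu, so the installs are likely along the following lines; the package names are assumptions, so verify them against the current release notes:

```
# Debian / Ubuntu (assumed package name)
sudo apt-get install libunwind8

# Redhat (assumed package names)
sudo yum install libunwind libicu
```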
Next Steps
Learn more about sqlpackage
Microsoft Privacy Statement
sqlpackage release notes
6/28/2018 • 2 minutes to read
sqlpackage 17.8
Release date: June 22, 2018
Build: 14.0.4079.2
The release includes the following fixes:
Improved error messages for connection failures, including the SqlClient exception message.
Added MaxParallelism command-line parameter to specify the degree of parallelism for database operations.
Support index compression on single partition indexes for import/export.
Fixed a reverse engineering issue for XML column sets with SQL 2017 and later.
Fixed an issue where scripting the database compatibility level 140 was ignored for Azure SQL Database.
sqlpackage 17.4.1
Release date: January 25, 2018
Build: 14.0.3917.1
The release includes the following fixes:
When importing an Azure SQL Database .bacpac to an on-premises instance, fixed errors due to 'Database
master keys without password are not supported in this version of SQL Server'.
Database catalog collation support.
Fixed an unresolved pseudo column error for graph tables.
Added ThreadMaxStackSize command-line parameter to parse TSQL with a large number of nested
statements.
Fixed using the SchemaCompareDataModel with SQL authentication to compare schemas.
sqlpackage 17.4.0
Release date: December 12, 2017
Build: 14.0.3881.1
The release includes the following fixes:
Do not block when encountering a database compatibility level that is not understood. Instead, the latest Azure SQL
Database or on-premises platform is assumed.
Added support for 'temporal retention policy' on SQL2017+ and Azure SQL Database.
Added /DiagnosticsFile:"C:\Temp\sqlpackage.log" command-line parameter to specify a file path to save
diagnostic information.
Added /Diagnostics command-line parameter to log diagnostic information to the console.
SqlPackage.exe is a command-line utility that automates the following database development tasks:
Extract: Creates a database snapshot (.dacpac) file from a live SQL Server or Azure SQL Database.
Publish: Incrementally updates a database schema to match the schema of a source .dacpac file. If the
database does not exist on the server, the publish operation creates it. Otherwise, an existing database is
updated.
Export: Exports a live database - including database schema and user data - from SQL Server or Azure SQL
Database to a BACPAC package (.bacpac file).
Import: Imports the schema and table data from a BACPAC package into a new user database in an
instance of SQL Server or Azure SQL Database.
DeployReport: Creates an XML report of the changes that would be made by a publish action.
DriftReport: Creates an XML report of the changes that have been made to a registered database since it
was last registered.
Script: Creates a Transact-SQL incremental update script that updates the schema of a target to match the
schema of a source.
The SqlPackage.exe command line allows you to specify these actions along with action-specific parameters and
properties.
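For example, extracting a .dacpac from a local database and then publishing it to another server might look like the following; the server names, database name, and file path are hypothetical:

```
REM Extract: create a schema snapshot (.dacpac) from a live database
SqlPackage.exe /Action:Extract /SourceServerName:localhost /SourceDatabaseName:MyDB /TargetFile:"C:\Temp\MyDB.dacpac"

REM Publish: incrementally update the target database to match the snapshot
SqlPackage.exe /Action:Publish /SourceFile:"C:\Temp\MyDB.dacpac" /TargetServerName:TestServer /TargetDatabaseName:MyDB
```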
Download the latest version. For details about the latest release, see the release notes.
Command-Line Syntax
SqlPackage.exe initiates the actions specified by the parameters, properties, and SQLCMD variables supplied
on the command line.
SQLCMD Variables
The following table describes the format of the option that you can use to override the value of a SQL command
(sqlcmd) variable used during a publish action. The values of variables specified on the command line override
other values assigned to the variable (for example, in a publish profile).
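The table rows for this option were not captured above. The override takes the form /Variables:{name}={value} (short form /v); the example below assumes a .dacpac whose publish scripts reference a SQLCMD variable named Environment:

```
REM Override the SQLCMD variable "Environment" during a publish
SqlPackage.exe /Action:Publish /SourceFile:"C:\Temp\MyDB.dacpac" /TargetServerName:localhost /TargetDatabaseName:MyDB /Variables:Environment=Production
```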
PARAMETER DEFAULT DESCRIPTION
DriftReport Parameters
A SqlPackage.exe DriftReport action creates an XML report of the changes that have been made to the registered
database since it was last registered.
Help for DriftReport action
PARAMETER SHORT FORM VALUE DESCRIPTION