
SQL Server DBA Phone Interview Questions

Although no two phone interviews are the same, below outlines some potential questions to keep
in mind as you prepare for a SQL Server DBA phone interview:

Can you explain your skill set?


o Employers look for the following:
DBA (Maintenance, Security, Upgrades, Performance Tuning, etc.)
Database developer (T-SQL, SSIS, Analysis Services, Reporting Services,
Crystal Reports, Service Broker, etc.)
Communication skills (oral and written)
o The DBA's opportunity
This is your 30-second elevator pitch outlining your technical expertise
and how you can benefit the organization

Can you explain the environments you have worked in related to the following items:
o SQL Server versions
o SQL Server technologies
Relational engine, Reporting Services, Analysis Services, Integration
Services
o Number of SQL Servers
o Number of instances
o Number of databases
o Range of size of databases
o Number of DBAs
o Number of Developers
o Hardware specs (CPUs, memory, 64 bit, SANs)

What are the tasks that you perform on a daily basis and how have you automated them?
o For example, daily checks could include:
Check for failed processes
Research errors
Validate disk space is not low
Validate none of the databases are offline or corrupt
Perform database maintenance as time permits
o For example, automation could include:
Set up custom scripts to query for particular issues and email the team
Write error messages centrally in the application and review that data
Set up Operators and Alerts on SQL Server Agent Jobs for automated job
notification
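For instance, a minimal sketch of the Operator and Alert approach using the msdb stored procedures (the operator name, email address and severity level are illustrative, and Database Mail is assumed to be configured):

    USE msdb;
    GO
    -- Create an operator to receive notifications
    EXEC dbo.sp_add_operator
        @name = N'DBA Team',
        @email_address = N'dba-team@example.com';
    -- Create an alert that fires on severity 17 (insufficient resources) errors
    EXEC dbo.sp_add_alert
        @name = N'Severity 17 Errors',
        @severity = 17;
    -- Email the operator whenever the alert fires
    EXEC dbo.sp_add_notification
        @alert_name = N'Severity 17 Errors',
        @operator_name = N'DBA Team',
        @notification_method = 1; -- 1 = email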

How do you re-architect a process?


o Review the current process to understand what is occurring
o Backup the current code for rollback purposes
o Determine what the business and technical problems are with the process
o Document the requirements for the new process

o Research options to address the overall business and technology needs
For example, these could include:
Views
Synonyms
Service Broker
SSIS
Migrate to a new platform
Upgrade in place
o Design and develop a new solution
o Conduct testing (functional, load, regression, unit, etc.)
o Run the systems in parallel
o Sunset the existing system
o Promote the new system

What is your experience with third party applications and why would you use them?
o Experience
Backup tools
Performance tools
Code or data synchronization
Disaster recovery\high availability
o Why
Need to improve upon the functionality that SQL Server offers natively
Save time, save money, better information or notification

How do you identify and correct a SQL Server performance issue?


o Identification - Use native tools like Profiler, Perfmon, system stored procedures,
dynamic management views, custom stored procedures or third party tools
o Analysis - Analyze the data to determine the core problems
o Testing - Test the various options to ensure they perform better and do not cause
worse performance in other portions of the application
o Knowledge sharing - Share your experience with the team to ensure they
understand the problem and solution, so the issue does not occur again

What are the dynamic management views and what value do they offer?
o The DMVs are a set of system views, new to SQL Server 2005 and beyond, that
provide insights into particular portions of the engine
o Here are some of the DMV's and the associated value:
sys.dm_exec_query_stats and sys.dm_exec_sql_text - Buffered code in
SQL Server
sys.dm_os_buffer_descriptors - Pages currently in the buffer pool
sys.dm_tran_locks - Locking and blocking
sys.dm_os_wait_stats - Wait stats
sys.dm_exec_requests and sys.dm_exec_sessions - Percentage complete
for a process
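As an illustration, sys.dm_exec_query_stats is commonly cross-applied to sys.dm_exec_sql_text to find the most expensive cached statements; a minimal sketch:

    -- Top 10 cached statements by total CPU time (SQL Server 2005 and beyond)
    SELECT TOP 10
        qs.total_worker_time AS total_cpu_time,
        qs.execution_count,
        st.text AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;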

What is the process to upgrade from DTS to SSIS packages?


o You can follow the steps of the migration wizard but you may need to manually
upgrade portions of the package that were not upgraded by the wizard
o For script-related tasks, these should be upgraded to new native components or
VB.NET code
Problem
With the many Data Transformation Services (DTS) Packages that have been developed
and deployed for B2B, data integration and BI needs, this portion of an upgrade from
SQL Server 2000 to 2005 needs special attention. DTS Packages have become ingrained
in many applications and business processes, making them business critical for internal
applications and mission critical for business partners. In addition, DTS Packages are
probably being used in ways that were never originally intended, further complicating
the upgrade process. In some respects, DTS Packages called directly from web pages or
triggered automatically by a business event follow a much different paradigm than DTS
Packages called from a SQL Server Job, which is probably how DTS Package execution
was originally intended. With the varying usage of the SQL Server 2000 DTS Packages,
what is the upgrade process to SQL Server 2005 Integration Services (SSIS) Packages?
Solution
The DTS Package (SQL Server 2000) to SSIS Package (SQL Server 2005) upgrade is
dependent on the Business Intelligence Development Studio which follows the Visual
Studio paradigm of solutions and projects. In the example below, we will create a single
solution and project with a single SSIS Package, although numerous SSIS Packages can
reside in a single solution. Follow these steps to migrate your DTS Packages to SSIS
Packages with the Migrate DTS 2000 Package Wizard:

Steps:

1. SQL Server 2000 DTS Package - The original logic is to delete data from the
destination table and then import all of the data.
2. Migrate DTS 2000 Package - Start the migration wizard by starting the Business
Intelligence Development Studio, creating an Integration Services project and
navigating to Project | Migrate DTS 2000 Package.
3. Choose Source Location - Specify the SQL Server 2000 server name with the DTS
Package that needs to be migrated and the authentication type. Press the 'Next >'
button to continue the process.
4. Choose Destination Location - Specify the directory on your desktop to save the
SSIS Package file. Press the 'Next >' button to continue the process.
5. List Packages - Review all DTS Packages on the SQL Server 2000 instance that can
be upgraded. Press the 'Next >' button to continue the process.
6. Specify a Log File - Press the 'Browse' button to specify a log file for the
migration process. Press the 'Next >' button to continue the process.
7. Complete the Wizard - Review the configurations and then press the 'Finish' button
to begin the migration process.
8. Migrating the packages... - Review the status of the migration.
9. Integration Services Package - Review the objects to validate the code was
successfully migrated.
For information on deploying SSIS Packages, reference

What are some of the features of SQL Server 2012 that you are looking into and why are
they of interest?
o AlwaysON
o Contained Databases
o User Defined Server Roles
o New date and time functions
o New FORMAT and CONCAT functions
o New IIF and CHOOSE functions
o New paging features with OFFSET and FETCH
o NOTE - Many more new features do exist; this is an abbreviated list.

Keep in mind that these questions are primarily related to the relational engine, so a BI DBA
would face a whole different set of questions. In addition, what you know about the
organization and the role should guide you toward the types of questions to prepare for
during the phone interview.
SQL Server Backup and Recovery

Question 1 - What are 2 options to validate whether or not a backup will restore
successfully?
o Restore the backup as a portion of a testing process or log shipping.
o Restore the backup with the Verify Only option.

Question 2 - How can you issue a full backup and not interrupt the LSNs?
o Issue a copy only backup.
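For example, both answers sketched in T-SQL (the database name and backup path are placeholders):

    -- Full backup that does not disturb the backup LSN chain
    BACKUP DATABASE MyDatabase
    TO DISK = N'C:\Backups\MyDatabase_CopyOnly.bak'
    WITH COPY_ONLY;

    -- Validate that the backup media is readable without restoring it
    RESTORE VERIFYONLY
    FROM DISK = N'C:\Backups\MyDatabase_CopyOnly.bak';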

SQL Server Performance Tuning

Question 1 - Name as many native SQL Server performance monitoring and tuning tools
that you know of and their associated value.
o System objects - System objects such as sp_who2, sp_lock, fn_get_sql, etc.
provide a simple means to capture basic metrics related to locking, blocking,
executing code, etc.
o Profiler - In a nutshell, Profiler provides the lowest common denominator of
activity on a SQL Server instance. Profiler captures per session code with the
ability to filter the data collection based on database, login, host name, application
name, etc. in order to assess the IO, CPU usage, time needed, etc.
o Perfmon\System Monitor - Perfmon\System Monitor is responsible for macro
level metrics related to processes and sub systems.
o Dynamic Management Views and Functions - New to SQL Server 2005 and
beyond, the Dynamic Management Views and Functions offer a real time view
into the SQL Server sub systems.
o TYPEPERF.EXE - TYPEPERF.EXE is a command line tool included with the
Windows operating system that writes performance data to the command window
or to a file. It is necessary to capture performance data whenever you are trying to
diagnose performance issues on a server. Performance data provides information

on the server's utilization of the processor, memory, and disk, as well as SQL
Server-specific performance data.
o SQL Server Management Studio Built-in Performance Reports - As part of the
installation of SQL Server 2005 and beyond a number of performance-related
reports are installed. To get to these reports open the SQL Server Management
Studio (SSMS) and connect to a SQL Server instance. If you don't have an
instance of Reporting Services installed then the icon will be disabled.

Question 2 - How do you go about tuning a SQL Server query?


o Identify the query causing the issue.
o Review the query plan by issuing SHOWPLAN_TEXT, SHOWPLAN_ALL,
Graphical Query Plan or sys.dm_exec_query_stats.
o Review the individual query components to determine which components of the
query have the highest cost.
o Outline options to improve the query such as moving from cursor based logic to
set based logic or vice versa, changing the JOIN order, WHERE clause or
ORDER BY clause, adding indexes, removing indexes, creating covering indexes,
etc.
o Test the options to determine the associated performance improvement.
o Implement the solution.
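As a sketch of the plan review step, SHOWPLAN_TEXT returns the estimated plan without executing the statement (the query is a placeholder):

    SET SHOWPLAN_TEXT ON;
    GO
    -- The plan, not the result set, is returned for this query
    SELECT OrderID
    FROM dbo.Orders
    WHERE CustomerID = 42;
    GO
    SET SHOWPLAN_TEXT OFF;
    GO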

SQL Server Maintenance

Question 1 - What are the three options in SQL Server 2005 and beyond to rebuild
indexes?
o CREATE INDEX with DROP_EXISTING
o DROP INDEX and CREATE INDEX
o ALTER INDEX
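A brief sketch of all three options against a hypothetical index IX_Test1_Col1 on dbo.Test1:

    -- Option 1: CREATE INDEX with DROP_EXISTING
    CREATE NONCLUSTERED INDEX IX_Test1_Col1
        ON dbo.Test1 (Col1)
        WITH (DROP_EXISTING = ON);

    -- Option 2: DROP INDEX and CREATE INDEX
    DROP INDEX IX_Test1_Col1 ON dbo.Test1;
    CREATE NONCLUSTERED INDEX IX_Test1_Col1 ON dbo.Test1 (Col1);

    -- Option 3: ALTER INDEX (SQL Server 2005 and beyond)
    ALTER INDEX IX_Test1_Col1 ON dbo.Test1 REBUILD;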

Question 2 - Name 3 or more DBCC commands and their associated purpose.


o DBCC CACHESTATS - Displays information about the objects currently in the
buffer cache.
o DBCC CHECKDB - This will check the allocation of all pages in the database as
well as check for any integrity issues.
o DBCC CHECKTABLE - This will check the allocation of all pages for a specific
table or index as well as check for any integrity issues.
o DBCC DBREINDEX - This command will reindex your table. If the indexname
is left out then all indexes are rebuilt. If the fillfactor is set to 0 then this will use
the original fillfactor when the table was created.
o DBCC PROCCACHE - This command will show you information about the
procedure cache and how much is being used.
o DBCC MEMORYSTATUS - Displays how the SQL Server buffer cache is
divided up, including buffer activity.
o DBCC SHOWCONTIG - This command gives you information about how much
space is used for a table and indexes. Information provided includes number of
pages used as well as how fragmented the data is in the database.

o DBCC SHOW_STATISTICS - This will show how statistics are laid out for an
index. You can see how distributed the data is and whether the index is really a
good candidate or not.
o DBCC SHRINKFILE - This will allow you to shrink one of the database files.
This is equivalent to doing a database shrink, but you can specify what file and
the size to shrink it to. Use the sp_helpdb command along with the database name
to see the actual file names used.
o DBCC SQLPERF - This command will show you how much of the transaction log
is being used.
o DBCC TRACEON - This command will turn on a trace flag to capture events in
the error log. Trace Flag 1204 captures deadlock information.
o DBCC TRACEOFF - This command turns off a trace flag.
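For instance, a few of these commands in use (the database, table and index names are placeholders):

    -- Check allocation and integrity of all pages in the database
    DBCC CHECKDB ('MyDatabase');

    -- Report transaction log space usage for all databases
    DBCC SQLPERF (LOGSPACE);

    -- Review the statistics histogram for one index
    DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_CustomerID);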

SQL Server Database Design

Question 1 - What happens when you add a column in the middle of a table (dbo.Test1)
in SQL Server Management Studio?
o Management Studio creates a temporary table called dbo.Tmp_Test1 with the
new structure.
o If there is data in the original table dbo.Test1 this data is inserted into the new
temp table dbo.Tmp_Test1 (now you have two sets of the same data).
o The original table dbo.Test1 is dropped.
o The new table dbo.Tmp_Test1 is renamed to dbo.Test1.
o If the table has indexes all of the indexes are recreated.

Question 2 - What are included columns with respect to SQL Server indexing?
o A new type of index was developed in SQL Server 2005 and beyond that assists
in situations where a covering index is needed.
o Indexes with Included Columns are nonclustered indexes that have the following
benefits:
Columns defined in the INCLUDE clause, called non-key columns, are not
counted against the index key column and size limits by the database engine.
Columns that previously could not be used in queries, like nvarchar(max),
can be included as a non-key column.
A maximum of 1023 additional columns can be used as non-key columns.
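For example, a covering index sketch with hypothetical table and column names:

    -- CustomerID is the key; the INCLUDE columns cover the query
    -- without counting against the index key limits
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderDate, TotalDue);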

SQL Server Business Intelligence

Question 1 - Name some new features from Data Transformation Services to SQL Server
Integration Services.
o SSIS checkpoints.
o SSIS logging.
o SSIS package configurations.
o SSIS Breakpoint.
o Dynamic flat file connections.
o SSIS batch processing.

o MERGE JOIN.

Question 2 - How do you backup Analysis Services databases?


o Create the XML statement to backup the Analysis Services databases, then create
a SQL Server Agent Job to perform the task on a daily basis.

Question Difficulty = Easy

Question 1: What sorts of functionality does SQL Server Agent provide?


o SQL Server Agent is a Windows service that accompanies each instance of SQL
Server on a machine for most editions of SQL Server.
o SQL Server Agent is primarily a job scheduler for executing T-SQL, SSIS, DOS,
etc. scripts.
o SQL Server Agent is also responsible for defining Operators and Alerts.
Operators can be associated with Jobs or Alerts, so that particular people
(email addresses, pagers, NET SEND) are notified or distribution lists are
notified if an issue occurs.
Alerts can be set up for custom conditions or errors of a particular severity
level.

Question 2: Do all of the SQL Server 2005, 2008, 2008 R2 editions install the SQL
Server Agent service by default?
o No - The SQL Server Express Edition does not have a SQL Server Agent Service.

Question 3: If SQL Server Express does not have a job scheduling interface what
alternatives are available?
o Windows Task Scheduler.
o Third party solutions.

Question 4: True or False - Can a single Job have multiple Job Schedules?
o True.

Question 5: Which database stores the SQL Server Agent objects?


o MSDB

Question Difficulty = Moderate

Question 1: How many options are available to identify failed jobs?


o Manually review the failed Jobs in Management Studio.
o Set up an automated process to query the msdb.dbo.sysjobhistory system table to
find the failures.
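A minimal sketch of that automated query (in msdb.dbo.sysjobhistory, run_status = 0 indicates failure):

    SELECT j.name AS job_name,
           h.step_id,
           h.run_date,
           h.run_time,
           h.message
    FROM msdb.dbo.sysjobhistory AS h
    INNER JOIN msdb.dbo.sysjobs AS j
        ON j.job_id = h.job_id
    WHERE h.run_status = 0; -- 0 = failed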

Question 2: How many of the SQL Server Agent system tables can you name with their
associated purpose?
o sysjobactivity stores data about job activity
o sysjobhistory stores data for all historical executions of all jobs
o sysjobs stores data about each job such as the name
o sysjobschedules stores job schedule information

o sysjobservers stores server information related to a job
o sysjobsteps stores specific job step information such as the type of code being
issued, the actual code, etc.
o sysjobstepslogs stores specific job step log information for each run if this is
enabled.

Question 3: How many of the SQL Server Agent system stored procedures can you name
with their associated purpose?
o sp_help_job
This stored procedure returns information about the job.
If no parameters are used information is returned for all jobs.
If a specific job_id is passed it gives you job information, job step
information, schedule information and last run information.
o sp_help_jobactivity
This stored procedure returns information about the status of the job run.
If no parameters are used information is returned for all jobs.
o sp_help_jobcount
This stored procedure gives you a count of how many jobs a schedule is
tied to.
This stored procedure requires either @schedule_id or @schedule_name
to be passed as a parameter.
o sp_help_jobhistory
This stored procedure returns all history information for all of the job
runs.
If no parameters are used information is returned for all jobs.
If you also use parameter @mode = N'FULL' this provides additional
information about each job step.
o sp_help_jobs_in_schedule
This stored procedure gives you a list of the jobs that are tied to a
schedule.
This requires either @schedule_id or @schedule_name to be passed as a
parameter.
o sp_help_jobschedule
This stored procedure provides jobs schedule information for a particular
job.
This stored procedure requires either @job_id or @job_name to be
passed.
o sp_help_jobserver
This stored procedure provides information about a specific server tied to
a job.
This stored procedure requires either @job_id or @job_name to be
passed.
o sp_help_jobstep
This stored procedure provides information about the job steps for a
specific job.
This stored procedure requires either @job_id or @job_name to be
passed.
o sp_help_jobsteplog
This stored procedure returns information about a specific job step log.
This stored procedure requires either @job_id or @job_name to be
passed.
o sp_get_composite_job_info
This stored procedure returns combined data for all jobs in the system.
If no parameters are used info is returned for all jobs.
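For example (the job name is a placeholder):

    -- Information for all jobs
    EXEC msdb.dbo.sp_help_job;

    -- Job, step, schedule and last run information for one job
    EXEC msdb.dbo.sp_help_job @job_name = N'Nightly Backup';

    -- Full history, including each job step
    EXEC msdb.dbo.sp_help_jobhistory @job_name = N'Nightly Backup', @mode = N'FULL';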

Question 4: What resources are available to troubleshoot SQL Server Agent?


o SQL Server Agent Log is a record of all entries written by the SQL Server Agent
service.
o Performance Monitor and Profiler can be set up to monitor the status of a
particular job.

Question 5: True or False. Besides the MSDB database, SQL Server Agent also has
configuration parameter related data stored in the registry.
o True.

Question Difficulty = Advanced

Question 1: What is multi-server administration and when would you use it?
o Job management paradigm with a master server and one or more target servers.
The master server distributes jobs to and receives status from the target servers,
with the entire job and job step related information stored on the master server.
When the jobs complete on the target servers, notification is sent to the master
server so this server has the updated information. Multi-server administration is
used in an enterprise environment where a consistent set of jobs needs to run on
numerous SQL Servers; this technology helps to consolidate the creation,
execution and management of those jobs.

Question 2: What is a SQL Server Agent Proxy? Can you name some of the sub-systems
proxies? Why are the proxies of any significance?
o A SQL Server Agent Proxy is an account that is set up to help secure a particular
sub-system, so that if a login\user trying to access the particular sub-system does
not have rights, the job step can run under the proxy account, which does.
o The SQL Server Agent Proxies include:
ActiveX Script
Operating System (CmdExec)
Replication Distributor
Replication Merge
Replication Queue Reader
Replication Snapshot
Replication Transaction-Log Reader
Analysis Services Command
Analysis Services Query
SSIS Package Execution
Unassigned Proxies
o The SQL Server Agent Proxies offer a new level of granularity for SQL Server
Agent that was not previously available.

Question 3: What are the new SQL Server Agent Fixed Database Roles and what is the
significance of each role?
o SQLAgentUserRole - Ability to manage Jobs that they own
o SQLAgentReaderRole - All of the SQLAgentUserRole rights and the ability to
review multi-server jobs, their configurations and history
o SQLAgentOperatorRole - All of the SQLAgentReaderRole rights and the ability
to review operators, proxies and alerts, execute, stop or start all local jobs, delete
the job history for any local job as well as enable or disable all local jobs and
schedules

Question Difficulty = Easy

Question 1: What isolation levels will provide completely read-consistent views of a
database to all transactions?
o Answer (SQL Server 2000): Only the SERIALIZABLE isolation level will
provide a completely read-consistent view of a database to a given transaction. In
any of the other isolation levels, you could perceive some/all of the following,
depending on the isolation level running in:
Uncommitted dependency/dirty reads
Inconsistent Analysis/non-repeatable reads
Phantom reads (via insert/delete)
o Answer (SQL Server 2005): Both the SERIALIZABLE and SNAPSHOT
isolation levels will provide a completely read-consistent view of a database to a
given transaction. In any of the other isolation levels, you could perceive some/all
of the following, depending on the isolation level running in:
Uncommitted dependency/dirty reads
Inconsistent Analysis/non-repeatable reads
Phantom reads (via insert/delete)
Question 2: Within the READ_COMMITTED isolation level, during a read operation
how long are locks held/retained for?
o Answer: When SQL Server executes a statement at the read committed isolation
level, it acquires short lived share locks on a row by row basis. The duration of
these share locks is just long enough to read and process each row; the server
generally releases each lock before proceeding to the next row. Thus, if you run a
simple select statement under read committed and check for locks, you will
typically see at most a single row lock at a time. The sole purpose of these locks is
to ensure that the statement only reads and returns committed data. The locks
work because updates always acquire an exclusive lock which blocks any readers
trying to acquire a share lock.
Question 3: Within the REPEATABLE_READ and SERIALIZABLE isolation levels,
during a read operation and assuming row-level locking, how long are locks held/retained
for?
o Answer: Within either of these isolation levels, locks are held for the duration of
the transaction, unlike within the READ_COMMITTED isolation level as noted
above.
Question 4: Can locks ever be de-escalated?
o Answer: No, locks are only escalated, never de-escalated. See
http://msdn2.microsoft.com/en-us/library/ms184286.aspx.

Question Difficulty = Moderate

Question 1: What are the different types of lock modes in SQL Server 2000 and 2005?
o Answer:
Shared
Update
Exclusive
Schema (modification and stability)
Bulk Update
Intent (shared, update, exclusive)
Key Range (shared, insert, exclusive)
Question 2: Can you explain scenarios where each type of lock would be taken?
o Answer:
SHARED - Used for read operations that do not change or update data,
such as a SELECT statement.
UPDATE - Used on resources that can be updated. Prevents a common
form of deadlock that occurs when multiple sessions are reading, locking,
and potentially updating resources later. In a repeatable read or
serializable transaction, the transaction reads data, acquiring a shared (S)
lock on the resource (page or row), and then modifies the data, which
requires lock conversion to an exclusive (X) lock. If two transactions
acquire shared-mode locks on a resource and then attempt to update data
concurrently, one transaction attempts the lock conversion to an exclusive
(X) lock. The shared-mode-to-exclusive lock conversion must wait
because the exclusive lock for one transaction is not compatible with the
shared-mode lock of the other transaction; a lock wait occurs. The second
transaction attempts to acquire an exclusive (X) lock for its update.
Because both transactions are converting to exclusive (X) locks, and they
are each waiting for the other transaction to release its shared-mode lock, a
deadlock occurs. To avoid this potential deadlock problem, update (U)
locks are used. Only one transaction can obtain an update (U) lock to a
resource at a time. If a transaction modifies a resource, the update (U) lock
is converted to an exclusive (X) lock.
EXCLUSIVE - Used for data-modification operations, such as INSERT,
UPDATE, or DELETE. Ensures that multiple updates cannot be made to
the same resource at the same time.
INTENT - Used to establish a lock hierarchy. The types of intent locks
are: intent shared (IS), intent exclusive (IX), and shared with intent
exclusive (SIX). (Another question in the Difficult level section expands
on this)
SCHEMA - Used when an operation dependent on the schema of a table is
executing. The types of schema locks are: schema modification (Sch-M)
and schema stability (Sch-S).
BULK UPDATE - Used when bulk copying data into a table and the
TABLOCK hint is specified.
KEY RANGE - Protects the range of rows read by a query when using the
serializable transaction isolation level. Ensures that other transactions

cannot insert rows that would qualify for the queries of the serializable
transaction if the queries were run again.
Question 3: What is lock escalation and what triggers it?
o Answer: The process of converting many fine-grained locks into fewer
coarse-grained locks.
Escalation reduces system resource consumption/overhead while
increasing the possibility of concurrency conflicts.
To escalate locks, the Database Engine attempts to change the intent lock
on the table to the corresponding full lock, for example, changing an intent
exclusive (IX) lock to an exclusive (X) lock, or an intent shared (IS) lock
to a shared (S) lock. If the lock escalation attempt succeeds and the full
table lock is acquired, then all heap or B-tree, page (PAGE), key-range
(KEY), or row-level (RID) locks held by the transaction on the heap or
index are released. If the full lock cannot be acquired, no lock escalation
happens at that time and the Database Engine will continue to acquire row,
key, or page locks.
Lock escalation is triggered at either of these times:
When a single Transact-SQL statement acquires at least 5,000
locks on a single table or index.
When the number of locks in an instance of the Database Engine
exceeds memory or configuration thresholds.
If locks cannot be escalated because of lock conflicts, the Database
Engine periodically triggers lock escalation at every 1,250 new
locks acquired.
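Where escalation causes contention, it can also be influenced per table in SQL Server 2008 and beyond; a sketch with a placeholder table name:

    -- Prevent escalation to a table lock (row/page locks are still taken)
    ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);

    -- Restore the default behavior
    ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = TABLE);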
Question 4: Name as many of the lockable resources as possible in SQL Server 2005?
o Answer:
RID (single row on a heap)
KEY (single row (or range) on an index)
PAGE
EXTENT
HOBT (heap or b-tree)
TABLE (entire table, all data and indexes)
FILE
APPLICATION
METADATA
ALLOCATION_UNIT
DATABASE
Question 5: What requirements must be met for a BULK-UPDATE lock to be granted,
and what benefit do they serve?
o Answer: The Database Engine uses bulk update (BU) locks when bulk copying
data into a table, and either the TABLOCK hint is specified or the table lock on
bulk load table option is set using sp_tableoption. Bulk update (BU) locks allow
multiple threads to bulk load data concurrently into the same table while
preventing other processes that are not bulk loading data from accessing the table.
Question 6: What is the least restrictive type of lock? What is the most restrictive?

o Answer: The least restrictive type of lock is a shared lock. The most restrictive
type of lock is a schema-modification (Sch-M) lock.
Question 7: What is a deadlock and how is it different from a standard block situation?
o Answer: A deadlock occurs when two or more tasks permanently block each
other by each task having a lock on a resource which the other tasks are trying to
lock. In a deadlock situation, both transactions in the deadlock will wait forever
unless the deadlock is broken by an external process. In a standard blocking
scenario, the blocked task will simply wait until the blocking task releases the
conflicting lock.
Question 8: Which 2 isolation levels support optimistic/row-versioned-based
concurrency control?
o Answer: First is the READ COMMITTED isolation level. This is the only
level that supports both a pessimistic (locking-based) and an optimistic
(version-based) concurrency control model. Second is the SNAPSHOT isolation
level, which supports only an optimistic concurrency control model.
Question 9: What database options must be set to allow the use of optimistic models?
o Answer: The READ_COMMITTED_SNAPSHOT option for the read committed
optimistic model, and the ALLOW_SNAPSHOT_ISOLATION option for the
snapshot isolation level.
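For example (the database name is a placeholder):

    -- Enable the optimistic flavor of READ COMMITTED
    ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

    -- Allow transactions to request SNAPSHOT isolation
    ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;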
Question 10: What is the size of a lock structure?
o Answer: 96 bytes.

Question Difficulty = Difficult

Question 1: In what circumstances will you see key-range locks, and what are they
meant to protect against?
o Answer: You will only see key-range locks when operating in the
SERIALIZABLE isolation level.
o Key-range locks protect a range of rows implicitly included in a record set being
read by a Transact-SQL statement. The serializable isolation level requires that
any query executed during a transaction must obtain the same set of rows every
time it is executed during the transaction. A key range lock protects this
requirement by preventing other transactions from inserting new rows whose keys
would fall in the range of keys read by the serializable transaction.
o Key-range locking prevents phantom reads. By protecting the ranges of keys
between rows, it also prevents phantom insertions into a set of records accessed
by a transaction.
Question 2: Explain the purpose of INTENT locks?
o Answer: The Database Engine uses intent locks to protect placing a shared (S)
lock or exclusive (X) lock on a resource lower in the lock hierarchy. Intent locks
are named intent locks because they are acquired before a lock at the lower level,
and therefore signal intent to place locks at a lower level. Intent locks serve two
purposes:
To prevent other transactions from modifying the higher-level resource in
a way that would invalidate the lock at the lower level.

To improve the efficiency of the Database Engine in detecting lock
conflicts at the higher level of granularity.
Question 3: Can deadlocks occur on resources other than database objects?
o Answer: YES.
Question 4: What are the different types of resources that can deadlock?
o Answer: Deadlock is a condition that can occur on any system with multiple
threads, not just on a relational database management system, and can occur for
resources other than locks on database objects. Here are the resources:
Locks - Waiting to acquire locks on resources, such as objects, pages,
rows, metadata, and applications can cause deadlock.
Worker threads - A queued task waiting for an available worker thread can
cause deadlock. If the queued task owns resources that are blocking all
worker threads, a deadlock will result
Memory - When concurrent requests are waiting for memory grants that
cannot be satisfied with the available memory, a deadlock can occur.
Parallel query execution-related resources - Coordinator, producer, or
consumer threads associated with an exchange port may block each other
causing a deadlock usually when including at least one other process that
is not a part of the parallel query. Also, when a parallel query starts
execution, SQL Server determines the degree of parallelism, or the
number of worker threads, based upon the current workload. If the system
workload unexpectedly changes, for example, where new queries start
running on the server or the system runs out of worker threads, then a
deadlock could occur.
Multiple Active Result Sets (MARS) resources - Resources used to control
interleaving of multiple active requests under MARS, including:
User resource - when a thread is waiting for a resource that is
potentially controlled by a user application, the resource is
considered to be an external or user resource and is treated like a
lock
Session mutex - The tasks running in one session are interleaved,
meaning that only one task can run under the session at a given
time. Before the task can run, it must have exclusive access to the
session mutex.
Transaction mutex - All tasks running in one transaction are
interleaved, meaning that only one task can run under the
transaction at a given time. Before the task can run, it must have
exclusive access to the transaction mutex.
Question 5: Explain how the database engine manages the memory footprint for the lock
pool when running in a dynamic lock management mode.
o Answer (SQL Server 2000): When the server is started with locks set to 0, the
lock manager allocates two percent of the memory allocated to SQL Server to an
initial pool of lock structures. As the pool of locks is exhausted, additional locks
are allocated. The dynamic lock pool does not allocate more than 40 percent of
the memory allocated to SQL Server.

Generally, if more memory is required for locks than is available in
current memory, and more server memory is available (the max server
memory threshold has not been reached), SQL Server allocates memory
dynamically to satisfy the request for locks. However, if allocating that
memory would cause paging at the operating system level (for example, if
another application was running on the same computer as an instance of
SQL Server and using that memory), more lock space is not allocated.
o Answer (SQL Server 2005): When running in dynamic management mode (i.e.
if the server is started with the locks configuration option set to 0), the lock
manager acquires sufficient memory from the Database Engine for an initial pool
of 2,500 lock structures. As the lock pool is exhausted, additional memory is
acquired for the pool.
Generally, if more memory is required for the lock pool than is available
in the Database Engine memory pool, and more computer memory is
available (the max server memory threshold has not been reached), the
Database Engine allocates memory dynamically to satisfy the request for
locks. However, if allocating that memory would cause paging at the
operating system level (for example, if another application is running on
the same computer as an instance of SQL Server and using that memory),
more lock space is not allocated. The dynamic lock pool does not acquire
more than 60 percent of the memory allocated to the Database Engine.
After the lock pool has reached 60 percent of the memory acquired by an
instance of the Database Engine, or no more memory is available on the
computer, further requests for locks generate an error.
Question 6: Describe the differences between the pessimistic SERIALIZABLE model
and the optimistic SNAPSHOT model in terms of transactional isolation (i.e., not the
concurrency differences, but instead how the exact same transactional modifications may
result in different final outcomes).
o Answer:
o It is typically relatively simple to understand SERIALIZABLE. For the outcome
of two transactions to be considered SERIALIZABLE, it must be possible to
achieve this outcome by running one transaction at a time in some order.
o Snapshot does not guarantee this level of transactional isolation.
o Imagine the following sample scenario:
There is a bag containing a mixture of white and black marbles. Suppose
that we want to run two transactions. One transaction turns each of the
white marbles into black marbles. The second transaction turns each of the
black marbles into white marbles. If we run these transactions under
SERIALIZABLE isolation, we must run them one at a time. The first
transaction will leave a bag with marbles of only one color. After that, the
second transaction will change all of these marbles to the other color.
There are only two possible outcomes: a bag with only white marbles or a
bag with only black marbles.
If we run these transactions under snapshot isolation, there is a third
outcome that is not possible under SERIALIZABLE isolation. Each
transaction can simultaneously take a snapshot of the bag of marbles as it

exists before we make any changes. Now one transaction finds the white
marbles and turns them into black marbles. At the same time, the other
transaction finds the black marbles - but only those marbles that were
black when we took the snapshot - not those marbles that the first
transaction changed to black - and turns them into white marbles. In the
end, we still have a mixed bag of marbles with some white and some
black. In fact, we have precisely switched each marble.
Question Difficulty = Easy

Question 1: Consider a scenario where you issue a full backup. Then issue some
transaction log backups, next a differential backup, followed by more transaction log
backups, then another differential and finally some transaction log backups. If the SQL
Server crashes and if all the differential backups are bad, when is the latest point in time
you can successfully restore the database? Can you recover the database to the current
point in time without using any of the differential backups?
o Answer: You can recover to the current point in time, as long as you have all the
transaction log backups available and they are all valid. Differential backups do
not affect the transaction log backup chain.
Question 2: Assume the same scenario, however instead of issuing differential backups,
all three of the differential backups were full backups. Assume all the full backups are
corrupt with the exception of the first full backup. Can you recover the database to the
current point in time in this scenario?
o Answer: Yes, just as it is with question 1. Full backups do not affect
the transaction log backup chain. As long as you have all of the transaction log
backups and they are valid, you can restore the first full backup and then all
subsequent transaction log backups to bring the database current.
Question 3: What methods are available for removing fragmentation of any kind on an
index in SQL Server?
o Answer (SQL Server 2000):
DBCC INDEXDEFRAG
DBCC DBREINDEX
CREATE INDEX...WITH DROP_EXISTING (clustered)
DROP INDEX; CREATE INDEX
o Answer (SQL Server 2005): The same processes as SQL Server 2000, only
different syntax
ALTER INDEX...REORGANIZE
ALTER INDEX...REBUILD
CREATE INDEX...WITH DROP_EXISTING (clustered)
DROP INDEX; CREATE INDEX
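A sketch of measuring fragmentation first and then choosing a method (SQL Server 2005 syntax; the database, table and index names are placeholders):

    -- Check fragmentation to choose between REORGANIZE and REBUILD
    SELECT object_id, index_id, avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(N'MyDatabase'), NULL, NULL, NULL, 'LIMITED');

    -- Light fragmentation: reorganize (online, leaf level only)
    ALTER INDEX IX_Test1_Col1 ON dbo.Test1 REORGANIZE;

    -- Heavy fragmentation: rebuild
    ALTER INDEX IX_Test1_Col1 ON dbo.Test1 REBUILD;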

Question Difficulty = Moderate

Question 1: What is the fundamental unit of storage in SQL Server data files and what is
its size?
o Answer: A page with a size of 8k.

Question 2: What is the fundamental unit of storage in SQL Server log files and what is
its size?
o Answer: A log record, size is variable depending on the work being performed.
Question 3: How many different types of pages exist in SQL Server?
o Answer:
Data
Index
Text/Image (LOB, ROW_OVERFLOW, XML)
GAM (Global Allocation Map)
SGAM (Shared Global Allocation Map)
PFS (Page Free Space)
IAM (Index Allocation Map)
BCM (Bulk Change Map)
DCM (Differential Change Map)

Question Difficulty = Difficult

Question 1: What are the primary differences between an index reorganization and an
index rebuild?
o Answer:
A reorganization is an "online" operation by default; a rebuild is an
"offline" operation by default
A reorganization only affects the leaf level of an index
A reorganization swaps data pages in-place by using only the pages
already allocated to the index; a rebuild uses new pages/allocations
A reorganization is always a fully-logged operation; a rebuild can be a
minimally-logged operation
A reorganization can be stopped mid-process and all completed work is
retained; a rebuild is transactional and must be completed in entirety to
keep changes
Question 2: Can you explain the differences between fully-logged and minimally-logged operations?
o Answer: In a fully logged bulk operation, depending on the type of operation
being performed, SQL Server will log either each record as it is processed (when
performing a bulk data-load for example), or an image of the entire page that was
changed (when performing a re-index/create index for example). In a
minimally-logged operation, SQL Server will log space allocations only, and also
flip bit values in the BCM pages, assuming they are not already flipped, for
extents that are modified during the bulk operation.
o This minimizes both the space required to log bulk operations during the
execution of the operation, and also the time required to complete the bulk
operation, since very little data is logged and updated compared to a fully-logged
scenario.
o When a database is bulk-changeable (i.e. in the bulk-logged recovery model), the
BCM pages are reset when the first "BACKUP LOG" operation occurs following
the given bulk operation. During this transaction log backup, the extents that are

marked as modified in the BCM pages are included, in their entirety within the
transaction log backup. This results in a much larger transaction log backup than
would be expected for the size of the active transaction log. This is what allows
you to recover a bulk-logged operation if you have the transaction log backup
following the operation despite the fact that during the operation you are logging
only space allocations.
Question Difficulty = Easy

Question 1: Name five different tools which can be used for performance tuning and
their associated purpose.
o Performance Monitor\System Monitor - Tool to capture macro level performance
metrics.
o Profiler - Tool to capture micro level performance metrics based on the statements
issued by a login, against a database or from host name.
o Server Side Trace - System objects to write the detailed statement metrics to a
table or file, similar to Profiler.
o Dynamic Management Views and Functions - SQL Server objects with low level
metrics to provide insight into a specific portion of SQL Server i.e. the database
engine, query plans, Service Broker, etc.
o Management Studio's Built-In Performance Reports - Ability to capture point in
time metrics as pre-defined by Microsoft.
o Custom scripts - Custom scripts can be developed to monitor performance,
determine IO usage, monitor fragmentation, etc. all in an effort to improve
performance.
o Third party applications - Performance monitoring and tuning applications from
vendors in the SQL Server community.

Question 2: Explain how the hardware running SQL Server can help or hinder
performance.
o Taxed CPUs will queue requests and hinder query performance.
o Insufficient memory could cause paging resulting in slow query performance.
o Incorrectly configured disk drives could exacerbate IO problems.

Question 3: Why is it important to avoid functions in the WHERE clause?


o Because SQL Server will scan the index or the table as opposed to seeking the
data. The scan operation is a much more costly operation than a seek.
o Often a slightly different approach can be used to prevent using the function in the
WHERE clause yielding a favorable query plan and high performance.
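For example, a common rewrite with hypothetical table and column names:

    -- Function on the column forces a scan
    SELECT OrderID FROM dbo.Orders
    WHERE YEAR(OrderDate) = 2012;

    -- Equivalent range predicate allows an index seek on OrderDate
    SELECT OrderID FROM dbo.Orders
    WHERE OrderDate >= '20120101' AND OrderDate < '20130101';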

Question 4: How is it possible to capture the IO and time statistics for your queries?
o Use the SET STATISTICS IO and SET STATISTICS TIME settings in your
queries or enable the settings in your Management Studio session.
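For example (the query is a placeholder):

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(*) FROM dbo.Orders;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;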

Question 5: True or False - It is possible to correlate the Performance Monitor metrics
with Profiler data in a single SQL Server native product?
o True - This functionality is possible with SQL Server Profiler.

Question Difficulty = Moderate

Question 1: How can I/O statistics be gathered and reviewed for individual database
files?
o By using the fn_virtualfilestats function to capture the metrics.
o This process can be automated with a script to determine the file usage with
numerous samples.
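A minimal sketch for the current database (passing NULL returns all of its files):

    -- I/O metrics per database file, including I/O stalls
    SELECT DbId, FileId, NumberReads, NumberWrites, IoStallMS
    FROM sys.fn_virtualfilestats(DB_ID(), NULL);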

Question 2: What is a query plan and what is the value from a performance tuning
perspective?
o A query plan is the physical break down of the code being passed to the SQL
Server optimizer.
o The value from a performance tuning perspective is that each component of the
query can be understood and the percentage of resource utilization can be
determined at a micro level. As query tuning is being conducted, the detailed
metrics can be reviewed to compare the individual coding techniques to determine
the best alternative.

Question 3: True or False - It is always beneficial to configure TempDB with an equal
number of fixed sized files as the number of CPU cores.
o False - With always being the operative word in the question.
o Depending on the version of SQL Server, the disk subsystem, load, queries, etc., a
1 to 1 ratio of files to cores may be necessary on high end SQL Servers with
intense processing.
o If you do not have that luxury, a starting point may be to have half the number
of tempdb files as compared to CPU cores.
o This is a configuration to load test and monitor closely depending on the type of
processing, load, hardware, etc. that your SQL Server is expected to support.

Question 4: Explain the NOLOCK optimizer hint and some pros\cons of using the hint.
o The NOLOCK query hint allows SQL Server to ignore the normal locks that are
placed and held for a transaction allowing the query to complete without having
to wait for the first transaction to finish and therefore release the locks.
o This is one short term fix to help prevent locking, blocking or deadlocks.
o However, when the NOLOCK hint is used, dirty data is read which can
compromise the results returned to the user.
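For example:

    -- Reads without taking shared locks; may return uncommitted (dirty) data
    SELECT OrderID, Status
    FROM dbo.Orders WITH (NOLOCK);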

Question 5: Explain three different approaches to capture a query plan.


o SHOWPLAN_TEXT
o SHOWPLAN_ALL
o Graphical Query Plan
o sys.dm_exec_query_optimizer_info
o sys.dm_exec_query_plan
o sys.dm_exec_query_stats

Question Difficulty = Advanced

Question 1: True or False - A LEFT OUTER JOIN is always faster than a NOT EXISTS
statement.
o False - With always being the operative word. Depending on the situation the
OUTER JOIN may or may not be faster than a NOT EXISTS statement. It is
necessary to test the techniques, review the query plans and tune the queries
accordingly.

Question 2: Name three different options to capture the input (code) for a query in SQL
Server.
o DBCC INPUTBUFFER
o fn_get_sql
o sys.dm_exec_sql_text
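For example, with a hypothetical session id of 52:

    -- Last statement sent by the session
    DBCC INPUTBUFFER (52);

    -- Currently executing statement via the DMVs (SQL Server 2005 and beyond)
    SELECT st.text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
    WHERE r.session_id = 52;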

Question 3: Explain why the NORECOMPUTE option of UPDATE STATISTICS is
used.
o This command is used on a per table basis to prevent the table from having
statistics automatically updated based on the 'Auto Update Statistics' database
configuration.
o Taking this step will prevent UPDATE STATISTICS from running during an
unexpected time of the day and cause performance problems.
o After setting this option, it is necessary to run UPDATE STATISTICS manually
on a regular basis.
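For example (the table name is a placeholder):

    -- Update statistics once and opt the table out of automatic updates
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN, NORECOMPUTE;

    -- Thereafter, refresh manually on a schedule
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;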

Question 4: Explain a SQL Server deadlock, how a deadlock can be identified, how it is
a performance problem and some techniques to correct deadlocks.
o A deadlock is a situation where 2 spids have data locked and cannot release their
lock until the opposing spid releases their lock. Depending on the severity of the
deadlock, meaning the amount of data that is locked and the number of spids that
are trying to access the same data, an entire chain of spids can have locks and
cause a number of deadlocks, resulting in a performance issue.
o Deadlocks can be identified by Profiler in either textual, graphical or XML
format.
o Deadlocks are a performance problem because they can prevent 2 or more
processes from being able to process data. A deadlock chain can occur and
impact hundreds of spids based on the data access patterns, number of users,
object dependencies, etc.
o Deadlocks could require a database design change, T-SQL coding change to
access the objects in the same order, separating reporting and OLTP applications,
including NOLOCK statements in SELECT queries that can accept dirty data, etc.
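In addition to Profiler, deadlock detail can be written to the error log with a trace flag; a brief sketch:

    -- Write deadlock graphs to the SQL Server error log, instance-wide
    -- (trace flag 1204 produces the older textual format)
    DBCC TRACEON (1222, -1);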

Question 5: Please explain why SQL Server does not select the same query plan every
time for the same code (with different parameters) and how SQL Server can be forced to
use a specific query plan.
o The query plan is chosen based on the parameters and code being issued to the
SQL Server optimizer. Unfortunately, a slightly different query plan can cause
the query to execute much longer and use more resources than another query with
exactly the same code and only parameter differences.
o The OPTIMIZE FOR hint can be used to specify what parameter value we want
SQL Server to use when creating the execution plan. This is a SQL Server 2005
and beyond hint.
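For example (the table, column and literal value are placeholders):

    -- @CustomerID is a procedure parameter; the plan is compiled as if it were 42
    SELECT OrderID, Status
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    OPTION (OPTIMIZE FOR (@CustomerID = 42));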

EASY
1) What is SQL Server replication?

Replication is a subset of SQL Server that can move data and database objects in an
automated way from one database to another database. This allows users to work with the
same data at different locations and changes that are made are transferred to keep the
databases synchronized.

2) What are the different types of SQL Server replication?

Snapshot replication - As the name implies snapshot replication takes a snapshot of the
published objects and applies it to a subscriber. Snapshot replication completely
overwrites the data at the subscriber each time a snapshot is applied. It is best suited for
fairly static data or if it's acceptable to have data out of sync between replication
intervals. A subscriber does not always need to be connected, so data marked for
replication can be applied the next time the subscriber is connected. An example use of
snapshot replication is to update a list of items that only changes periodically.

Transactional replication - As the name implies, it replicates each transaction for the
article being published. To set up transactional replication, a snapshot of the publisher or
a backup is taken and applied to the subscriber to synchronize the data. After that, when a
transaction is written to the transaction log, the Log Reader Agent reads it from the
transaction log and writes it to the distribution database and then to the subscriber. Only
committed transactions are replicated to ensure data consistency. Transactional
replication is widely applied where high latency is not allowed, such as an OLTP system
for a bank or a stock trading firm, because you always need real-time updates of cash or
stocks.

Merge replication - This is the most complex type of replication, which allows changes
to happen at both the publisher and subscriber. As the name implies, changes are merged
to keep data consistency and a uniform set of data. Just like transactional replication, an
initial synchronization is done by applying snapshot. When a transaction occurs at the
Publisher or Subscriber, the change is written to change tracking tables. The Merge
Agent checks these tracking tables and sends the transaction to the distribution database
where it gets propagated. The merge agent has the capability of resolving conflicts that

occur during data synchronization. An example of using merge replication can be a store
with many branches where products may be centrally stored in inventory. As the overall
inventory is reduced it is propagated to the other stores to keep the databases
synchronized.
3) What is the difference between Push and Pull Subscription?
Push - As the name implies, a push subscription pushes data from publisher to the
subscriber. Changes can be pushed to subscribers on demand, continuously, or on a
scheduled basis.
Pull - As the name implies, a pull subscription requests changes from the Publisher. This
allows the subscriber to pull data as needed. This is useful for disconnected machines
such as notebook computers that are not always connected and when they connect they
can pull the data.
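For example, a push subscription might be registered at the Publisher along these lines
(SalesPub, SUBSRV, and SalesCopy are hypothetical names):

    -- Run at the Publisher: create a push subscription and its Distribution Agent
    EXEC sp_addsubscription @publication = N'SalesPub', @subscriber = N'SUBSRV',
        @destination_db = N'SalesCopy', @subscription_type = N'push';

    -- A pull subscription is registered with @subscription_type = N'pull' and
    -- then created at the Subscriber with sp_addpullsubscription.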
4) What are the different replication agents and what is their purpose?
Snapshot Agent - The Snapshot Agent is used with all types of replication. It prepares the
schema and the initial bulk copy files of published tables and other objects, stores the
snapshot files, and records information about synchronization in the distribution database.
The Snapshot Agent runs at the Distributor.
Log Reader Agent - The Log Reader Agent is used with transactional replication. It
moves transactions marked for replication from the transaction log on the Publisher to the
distribution database. Each database published using transactional replication has its own
Log Reader Agent that runs on the Distributor and connects to the Publisher (the
Distributor can be on the same computer as the Publisher).
Distribution Agent - The Distribution Agent is used with snapshot replication and
transactional replication. It applies the initial snapshot to the Subscriber and moves
transactions held in the distribution database to Subscribers. The Distribution Agent runs
at either the Distributor for push subscriptions or at the Subscriber for pull subscriptions.
Merge Agent - The Merge Agent is used with merge replication. It applies the initial
snapshot to the Subscriber and moves and reconciles incremental data changes that occur.
Each merge subscription has its own Merge Agent that connects to both the Publisher and
the Subscriber and updates both. The Merge Agent runs at either the Distributor for push
subscriptions or the Subscriber for pull subscriptions.
Queue Reader Agent - The Queue Reader Agent is used with transactional replication
with the queued updating option. The agent runs at the Distributor and moves changes
made at the Subscriber back to the
Publisher. Unlike the Distribution Agent and the Merge Agent, only one instance of the
Queue Reader Agent exists to service all Publishers and publications for a given
distribution database.
5) Does a specific recovery model need to be used for a replicated database?
Replication is not dependent on any particular recovery model. A database can participate
in replication whether it uses the simple, bulk-logged, or full recovery model. However,
how data is tracked for replication depends on the type of replication used.
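For example, switching a published database to the simple recovery model is allowed
(Sales is a hypothetical database name):

    -- Replication continues to work after this change. Note that for
    -- transactional replication, log records marked for replication are not
    -- truncated until the Log Reader Agent processes them, in any recovery model.
    ALTER DATABASE Sales SET RECOVERY SIMPLE;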
Medium
1) What type of locking occurs during snapshot generation?
Locking depends on the type of replication used:
o In snapshot replication, the snapshot agent locks the object during the entire
snapshot generation process.
o In transactional replication, locks are acquired initially for a very brief time and
then released. Normal operations on a database can continue after that.
o In merge replication, no locks are acquired during the snapshot generation
process.
2) What options are there to delete rows on the publisher and not on the subscriber?
One option is to replicate stored procedure execution instead of the actual DELETE
command. You can create two different versions of the stored procedure: one on the
publisher that performs the delete and another on the subscriber that does not. Another
option is to configure the article so that DELETE commands are not replicated, as in the
sketch below.
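A minimal sketch of the second option, assuming a hypothetical SalesPub publication and
Orders table:

    -- @del_cmd = N'NONE' tells transactional replication not to propagate
    -- DELETE statements for this article to Subscribers.
    EXEC sp_addarticle @publication = N'SalesPub', @article = N'Orders',
        @source_owner = N'dbo', @source_object = N'Orders',
        @del_cmd = N'NONE';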
3) Is it possible to run multiple publications and different types of publications from the same
distribution database?
Yes, this can be done; there are no restrictions on the number or types of publications
that can use the same distribution database. One thing to note, though, is that all
publications from a Publisher must use the same Distributor and distribution database.
4) Data is not being delivered to Subscribers. What are the possible reasons?
There are a number of possible causes for data not being delivered to Subscribers:
o The table is filtered, and there are no changes to deliver to a given Subscriber.
o One or more agents are not running or are failing with an error.
o Data is deleted by a trigger, or a trigger includes a ROLLBACK statement.
o A transactional subscription was initialized without a snapshot, and changes have
occurred on the Publisher since the publication was created.
o Replication of stored procedure execution for a transactional publication produces
different results at the Subscriber.
o The INSERT stored procedure used by a transactional article includes a condition
that is not met.
o Data is deleted by a user, a replication script, or another application.
5) Explain what the stored procedure sp_replcounters is used for.

sp_replcounters is a system stored procedure that returns information about the
transaction rate, latency, and first and last log sequence number (LSN) for each
publication on a server. It is run on the publishing server. Running this stored
procedure on a server that is acting as the distributor or subscribing to publications from
another server will not return any data.
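For example, run on the Publisher:

    -- Returns one row per published database with transaction rate, latency,
    -- and the first and last LSN information.
    EXEC sp_replcounters;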
Hard
1) How will you monitor replication latency in transactional replication?
Tracer tokens were introduced with SQL Server 2005 transactional replication as a way
to monitor the latency of delivering transactions from the publisher to the distributor and
from the distributor to the subscriber(s).
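A minimal sketch of posting and reviewing a tracer token, assuming a hypothetical
SalesPub publication (run in the publication database at the Publisher):

    -- Write a tracer token into the transaction log
    EXEC sp_posttracertoken @publication = N'SalesPub';

    -- List the tokens posted for the publication, then view the latency history
    -- for one of them (the @tracer_id value comes from the previous result set).
    EXEC sp_helptracertokens @publication = N'SalesPub';
    EXEC sp_helptracertokenhistory @publication = N'SalesPub',
        @tracer_id = -2147483641;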
2) If I create a publication with one table as an article, and then change the schema of the
published table (for example, by adding a column to the table), will the new schema ever be
applied at the Subscribers?
Yes. Schema changes to published tables must be made by using Transact-SQL or SQL Server
Management Objects (SMO). When schema changes are made in SQL Server Management
Studio, Management Studio attempts to drop and re-create the table, and since you cannot
drop published objects, the schema change will fail.
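For example, a column added with T-SQL at the Publisher is propagated automatically as
long as the publication's @replicate_ddl option is at its default of 1 (dbo.Orders and the
new column are hypothetical):

    -- Run at the Publisher in the publication database; the DDL change is
    -- replicated to Subscribers along with subsequent data changes.
    ALTER TABLE dbo.Orders ADD ShipRegion NVARCHAR(40) NULL;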
3) Is it possible to replicate data from SQL Server to Oracle?
Yes, this can be done using heterogeneous replication. In SQL Server 2000, publishing
data to other databases such as DB2 or Oracle was supported; however, publishing data
from other databases was not supported without custom programming. In SQL Server
2005 and later versions, Oracle databases can be directly replicated to SQL Server in
much the same way as standard SQL Server replication.
4) How will you monitor replication activity and performance? What privilege do you need to
use replication monitor?
The easiest way to monitor replication activity and performance is to use Replication
Monitor, but the following tools can also be used to monitor replication performance:
o T-SQL commands
o SQL Server Management Studio
To monitor replication, a user must be a member of the sysadmin fixed server role at the
Distributor or a member of the replmonitor fixed database role in the distribution
database. A system administrator can add any user to the replmonitor role, which allows
that user to view replication activity in Replication Monitor; however, the user cannot
administer replication.
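For example, a non-sysadmin user could be granted monitoring access along these lines
(run in the distribution database; the login name is hypothetical):

    USE distribution;
    -- Add the user to the replmonitor fixed database role
    -- (on versions before SQL Server 2012, use:
    --  EXEC sp_addrolemember N'replmonitor', N'CONTOSO\jdoe';)
    ALTER ROLE replmonitor ADD MEMBER [CONTOSO\jdoe];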
5) Can you tell me some of the common replication DMVs and their use?
sys.dm_repl_articles - Contains information about each article being published. It
returns data from the database being published and returns a row for each object being
published in each article.
sys.dm_repl_schemas - Contains information about each table and column being
published. It returns data from the database being published and returns one row for each
column in each object being published.
sys.dm_repl_traninfo - Contains information about each transaction being processed by
transactional replication.
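For example, run in the published database:

    -- One row for each object being published in each article
    SELECT * FROM sys.dm_repl_articles;

    -- One row for each column in each published object
    SELECT * FROM sys.dm_repl_schemas;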