sa_role, sso_role and oper_role are system roles; they are granted to the sa login by default.
On production boxes we have 2 CPUs on each box, on UAT we have 2 CPUs, and on the Dev server 4 CPUs.
We handled the tickets received through email for any production issues.
6. If the production server went down, what steps would you follow?
First I will inform all the application managers, and they will send an alert message to all the users regarding the downtime. Then I will look into the errorlog and take relevant action based on the error message. If I cannot solve the issue, I will escalate to my DBA manager and log a case with Sybase as priority P1 (system down).
* First check the network transfer rate (for example with ping against the server's host); if it is a network problem, contact the network team.
* Make sure that tempdb is large enough to support the user connections; as a rule of thumb, tempdb should be about 25% of the total size of all user databases.
* Make sure that update statistics is run and stored procedures are recompiled (sp_recompile) on a regular basis.
* Check the database fragmentation level and run a defrag exercise if necessary.
* Run sp_sysmon and sp_monitor and analyze the output (CPU utilization, etc.).
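The 25 percent tempdb rule of thumb above can be sketched as quick shell arithmetic; the database names and sizes below are hypothetical examples, not measured values:

```shell
# Hypothetical user database sizes, in MB.
sales_db_mb=4096
hr_db_mb=1024
reporting_db_mb=3072

# Rule of thumb from the checklist: tempdb ~ 25% of total user database size.
total_user_db_mb=$((sales_db_mb + hr_db_mb + reporting_db_mb))
tempdb_target_mb=$((total_user_db_mb / 4))

echo "total user db size: ${total_user_db_mb} MB"   # 8192 MB
echo "suggested tempdb:   ${tempdb_target_mb} MB"   # 2048 MB
```

The real sizes come from sp_spaceused or sp_helpdb on each database; the arithmetic is only a starting point to be validated against the actual sort and worktable load.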
9. What precautions would you take to avoid the same type of problem?
We never had such an issue, but I would document the steps taken to resolve it.
10. If the time comes such that you had to take an important decision, but your reporting manager is not there, how would you decide?
I would approach my project manager's boss, explain the situation, and seek permission from him. If he is not available either, I would take the call myself and keep all the application managers in the loop.
ps -eaf (lists all processes running on the UNIX box)
Yes, we can; for example, we create stored procedures to check the fragmentation level, and so on.
13. What do you need to do? Issue an ASE kill command on the offending connection, then un-suspend the database:
select lct_admin("unsuspend", db_id("db_name"))
14. What command helps you to know which process is running on a given port, but only the superuser can run this command?
On most UNIX systems this is netstat (for example, netstat -anp on Linux shows the owning process) or lsof -i; seeing other users' processes requires root.
15. For synchronizing the logins from a lower version to a higher version, take the 11.9.2 syslogins structure to the 12.5 (higher-version) server.
Create a table named logins in tempdb with this structure, bcp the data in to this logins table, then use master and run the following command:
insert into syslogins select *, null, null from tempdb..logins
(the null values fill the columns added to syslogins in the newer version).
16. How do you delete UNIX files which are more than 3 days old?
Be in the parent directory of snapshots and execute the command below.
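A minimal sketch of that command; the setup lines and file names are hypothetical, and the `find ... -mtime +3` line is the actual answer (run it from the parent directory of snapshots):

```shell
# Hypothetical setup: a snapshots directory with one old and one new file.
mkdir -p snapshots
touch snapshots/new_dump.dat
touch -t 202001010000 snapshots/old_dump.dat   # mtime set far in the past

# The answer: delete regular files last modified more than 3 days ago.
find snapshots -type f -mtime +3 -exec rm -f {} \;

ls snapshots   # only new_dump.dat remains
rm -rf snapshots
```

Consider running the find with -print first to preview what would be deleted before adding the -exec rm.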
17. How do you find the time taken for the rollback of a process?
Run kill <spid> with statusonly; it reports the progress of a rollback already in progress without actually killing anything.
(i) truncate_only: used to truncate the log gracefully. It checkpoints the database before truncating the log, and removes the inactive part of the log without making a backup copy. Use it on databases whose log segment is not on a separate device from the data segments. Don't specify a dump device or Backup Server name. NOTE: Use dump transaction with no_log as a last resort, and only after dump transaction with truncate_only fails.
(ii) no_log: use no_log when your transaction log is completely full. no_log does not checkpoint the database before dumping the log; it removes the inactive part of the log without making a backup copy, and without recording the procedure in the transaction log. Use no_log only when you have totally run out of log space and cannot run the usual dump transaction command. Use no_log as a last resort, and only after dump transaction with truncate_only fails.
Use dump transaction with truncate_only when the log is on the same segment as the data, or when you are not concerned with the recovery of recent transactions (for example, in an early development environment). When your usual method of dumping the transaction log (either the standard dump transaction command or dump transaction with truncate_only) fails because of insufficient log space, use dump transaction with no_log to truncate the log without recording the event.
Note: dump the database immediately afterward to copy the entire database, including the log.
NOTE: You should always use truncate_only first. There are times when there is absolutely no space left in the tran log, and then you will have to use the no_log option, which truncates the tran log but does not write into the transaction log; a dump tran with truncate_only does write into the tran log.
Normalization is the process of breaking your data into separate components to reduce the repetition of data. Normalization can go up to 5 levels, and each level reduces the repetition of data further: First, Second, and Third normal form, and BCNF.
Actually Normalization is the process of organizing data to minimize redundancy.
Normalization usually involves dividing a database into two or more tables and defining
relationships between the tables. The objective is to isolate data so that additions, deletions, and
modifications of a field can be made in just one table and then propagated through the rest of the
database via the defined relationships.
Basically you have to normalize your database up to 3 levels:
1st normal form
2nd normal form
3rd normal form
There are certain rules for database normalization; each rule is called a normal form. If the first rule is observed, the database is in First Normal Form; if the first three rules are observed, the database is considered to be in Third Normal Form. There are other rules too, like 4th Normal Form and 5th Normal Form.
For a table to be in first normal form, every column must be atomic: it cannot be decomposed into two or more subcolumns, and each row-and-column position can hold only one value.
For a table to be in second normal form, every non-key field must depend on the entire primary
key, not on part of a composite primary key. If a database has only single-field primary keys, it
is automatically in Second normal form.
For a table to be in third normal form, a non-key field cannot depend on another non-key field.
21. What are the precautions taken to reduce the down time?
22. What are the isolation levels? List the different isolation levels in Sybase. What is the default?
To avoid manually overriding locking, we have transaction isolation levels, which are tied to the transaction.
Isolation level 0 allows reading pages that are currently being modified; it allows dirty reads.
Isolation level 1, the default, allows reading committed pages only; no dirty reads are allowed.
Isolation level 2 allows a single page to be read many times within the same transaction and guarantees that the same value is read each time, by preventing other transactions from updating the rows that were read.
Isolation level 3 additionally prevents another transaction from updating, deleting, or inserting rows for pages previously read within the transaction.
The optdiag utility displays statistics from the systabstats and systatistics
tables. optdiag can also be used to update systatistics information. Only the SA can run optdiag (a command-line tool for reading, writing, and simulating table, index, and column statistics).
Advantages of optdiag
optdiag can display statistics for all the tables in a database, or for a single table
optdiag output contains additional information useful for understanding query costs, such as index height and the average row length.
optdiag is frequently used for other tuning tasks, so you should have these reports on hand
Disadvantages of optdiag
It produces a lot of output, so if you need only a single piece of information, such as the number of pages in the table, other methods are faster and have lower system overhead.
NOTE: What are the default character set and sort order after installation of Sybase ASE 15?
The default character set is cp850, which supports the English language, upper case and lower
case, and any special accent characters that are used in European languages.
The default sort order that goes with the character set is binary, which is the fastest of sorts
when building index structures or during execution of order by clauses.
24. How frequently do you defrag the database?
sp_cacheconfig
The Bourne shell is a basic shell bundled with all UNIX systems, whereas the Korn shell is a superset of the Bourne shell. It has added features such as aliases and a command history that can recall and edit previous commands.
Using syslogshold (the master database table that records the oldest active transaction in each database).
An index is a separate storage segment created for a table. There are two types of indexes, clustered and non-clustered.
Typically, a clustered index will be created on the primary key of a table, and non-clustered
indexes are used where needed.
Non-clustered indexes
Clustered index
Note: With lock datapages or lock datarows, clustered indexes are sorted physically only upon creation. After that, the indexes behave like non-clustered indexes.
The database consistency checker (dbcc) provides commands for checking the logical and physical consistency of a database: checking page linkage and data pointers at both page level and row level, using checkstorage, checktable, and checkdb.
dbcc page(dbid,pageno)
Partitioning is splitting large tables into smaller pieces, with alter table <table name> partition <n>.
When ASE is idle, the housekeeper task issues checkpoints that automatically flush dirty pages to disk.
40. What steps do you take if your server process slows down?
Check tempdb, and check when update statistics was last run; if it has not been run recently, I will update the statistics, followed by sp_recompile.
41. How do you check that the Sybase server is running from the UNIX box?
Use ps -ef | grep dataserver, or run the showserver utility from the ASE installation directory. Also check that dumps of the important system databases are clean.
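A hedged sketch of the UNIX-side check; `dataserver` is the usual name of the ASE engine binary (adjust for your installation), and the `[d]` bracket trick keeps grep from matching its own process entry:

```shell
# Look for a running ASE engine process.
if ps -ef | grep '[d]ataserver' > /dev/null; then
    echo "ASE dataserver process found"
else
    echo "no dataserver process running"
fi
```

The same pattern works for the Backup Server (backupserver binary) or any other ASE-related process.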
It is a utility used to copy the definitions of all objects of a database, either from a database to an operating system file or from an operating system file to a database. Invoke the defncopy program directly from the operating system. defncopy provides a non-interactive way of copying out definitions (create statements) for views, rules, defaults, triggers, or procedures from a database to an operating system file.
It is a utility to copy data from a table to a flat file and vice versa. bcp in works in one of two modes, fast bcp and slow bcp:
Slow bcp logs each row insert that it makes; it is used for tables that have one or more indexes or triggers.
Fast bcp logs only page allocations, copying data into tables without indexes or triggers.
To determine the bcp mode that is best for your copying task, consider that fast bcp might enhance performance, while slow bcp gives you greater data recoverability.
Defragmentation is dropping and recreating the indexes, so that the gap space is reclaimed.
Give me the Global variable names for the description given below
1. Error number reported for last SQL statement ( @@error)
2. Current transaction mode (chained or unchained)(@@tranchained)
3. Status of previous fetch statement in a cursor(@@sqlstatus)
4. Transaction nesting level(@@trancount)
5. Current process ID(@@spid)
What is the difference between static and dynamic configuration parameter in Sybase?
In Sybase ASE, when a dynamic configuration parameter is modified, the effect takes place immediately. When a static parameter is modified, the server must be rebooted for the effect to take place.
NOTE: What does the command sp_helpconfig "number of user connections", "100" return?
It returns the amount of memory that will be taken by the Sybase ASE server if the parameter is
set to that value.
Candidate key: A primary key or unique constraint column. A table can have multiple candidate
keys.
Alternate key: a candidate key that is not chosen as the primary key.
Composite key: An index key that includes two or more columns; for example
authors(au_lname,au_fname)
Candidate Key A Candidate Key can be any column or a combination of columns that can
qualify as unique key in database. There can be multiple Candidate Keys in one table. Each
Candidate Key can qualify as Primary Key.
Composite key - A primary key that consists of two or more attributes is known as a composite key.
Alternate Key - Any of the candidate keys that is not part of the primary key is called an alternate key.
60. What's the difference between a primary key and a unique key?
Both primary key and unique constraints enforce uniqueness of the columns on which they are defined, but by default a primary key creates a clustered index whereas a unique key creates a non-clustered index. A primary key doesn't allow NULLs; a unique key allows one NULL only.
A natural key is a key for a given table that uniquely identifies the row.
It is nothing but assigning indexes to a table and keeping their statistics updated, so that the query optimizer will prepare a good query plan for the table; with this, performance increases.
A concurrency control mechanism that protects the integrity of data and transaction results. Locking is used to prevent two users from attempting to change the same data at the same time, and to prevent processes that are selecting data from reading data that is in the process of being changed.
* page locks
* table locks
* demand locks
Page Locks
* shared
* exclusive
* update
shared
These locks are requested and used by readers of information. More than one connection can
hold a shared lock on a data page.
exclusive
The SQL Server uses exclusive locks when data is to be modified. Only one connection may
have an exclusive lock on a given data page. If a table is large enough and the data is spread
sufficiently, more than one connection may update different data pages of a given table
simultaneously.
update
An update lock is placed during a delete or an update while the SQL Server is hunting for the pages to be altered. While an update lock is in place, there can be shared locks, thus allowing for higher throughput.
The update lock(s) are promoted to exclusive locks once the SQL Server is ready to perform the
delete/update.
Table Locks
* intent
* shared
* exclusive
intent
Intent locks indicate the intention to acquire a shared or exclusive lock on a data page. Intent
locks are used to prevent other transactions from acquiring shared or exclusive locks on the
given page.
shared
This is similar to a page level shared lock but it affects the entire table. This lock is typically
applied during the creation of a non-clustered index.
exclusive
This is similar to a page level exclusive lock but it affects the entire table. If an update or delete
affects the entire table, an exclusive table lock is generated. Also, during the creation of a
clustered index an exclusive lock is generated.
Demand Locks
A demand lock prevents further shared locks from being set. The SQL Server sets a demand lock to indicate that a transaction is next in line to lock a table or a page. This avoids indefinite postponement when there is a flurry of readers while a writer wishes to make a change.
A deadlock occurs when two or more user processes each have a lock on a separate page or table and each wants to acquire a lock on the other process's page or table. The transaction with the least accumulated CPU time is killed and all of its work is rolled back.
The housekeeper is a task that becomes active when no other tasks are active. It writes dirty pages to disk, reclaims lost space, flushes statistics to systabstats, checks license usage, and performs other internal work. There is a limit of 14 worktables per query; the system creates them as needed.
update statistics can be run for specified columns, for all columns in an index, or for all columns in a table.
Usage: ASE keeps statistics about the distribution of the key values in each index, and uses these statistics in its decisions about which indexes to use in query processing.
Causes each stored procedure and trigger that uses the named table to be recompiled the next time it runs.
Usage: The queries used by stored procedure and triggers are optimized only once, when they
are compiled. As you add indexes or make other changes to your database that affect its
statistics, your compiled stored procedures and triggers may lose efficiency. By recompiling the
stored procedures and triggers that act on a table, you can optimize the queries for maximum
efficiency.
A device is, well, a device: storage media that holds images of logical pages. A device will have
a row in the sysdevices table.
A fragment is a part of a device, indicating a range of virtual page numbers that have been
assigned to hold the images of a range of logical page numbers belonging to one particular
database. A fragment is represented by a row in sysusages.
A segment is a label that can be attached to fragments. Objects can be associated with a
particular segment (technically, each indid in sysindexes can be associated with a different
segment). When future space is needed for the object, it will only be allocated from the free
space on fragments that are labeled with that segment.
There can be up to 32 segments in a database, and each fragment can be associated with any, all,
or none of them (warnings are raised if there are no segments associated). Sysusages has a
column called segmap which is a bitmapped index of which segments are associated, this maps
to the syssegments table.
What is a segment?
A segment is a label that points to one or more database devices. Segment names are used
in create table and create index commands to place tables or indexes on specific database
devices. Using segments can improve Adaptive Server performance and give the System
Administrator or Database Owner increased control over the placement, size, and space usage of
database objects.
You create segments within a database to describe the database devices that are allocated to the
database. Each Adaptive Server database can contain up to 32 segments, including the system-defined segments. Before assigning segment names, you must initialize the database devices with disk init and then make them available to the database with create database or alter database.
Most SQL Server processing is logged in the transaction log table, syslogs. Each database,
including the system databases master, model, sybsystemprocs, and tempdb, has its own
transaction log. As modifications to a database are logged, the transaction log continues to grow
until it is truncated, either by a dump transaction command or automatically if the trunc log on
chkpt option is turned on as described below. This option is not recommended in most
production environments where transaction logs are needed for media failure recovery, because it
does not save the information contained in the log.
The transaction log on SQL Server is a write-ahead log. After a transaction is committed, the log
records for that transaction are guaranteed to have been written to disk. Changes to data pages
may have been made in data cache but may not yet be reflected on disk.
WARNING!
This guarantee cannot be made when UNIX files are used as SYBASE devices.
When you issue a commit transaction, the transaction log pages are immediately written to disk
to ensure recoverability of the transaction. The modified data pages in cache might not be written
to disk until a checkpoint is issued by a user or SQL Server or periodically as the data cache
buffer is needed by other SQL Server users. Note that pages modified in data cache can be
written to disk prior to the transaction committing, but not before the corresponding log records
have been written to disk. This happens if buffers in data cache containing dirty pages are needed
to load in a new page.
If the trunc log on chkpt option is set for a database, SQL Server truncates the transaction log for
the database up to the page containing the oldest outstanding transaction when it issues a
checkpoint in that database. A transaction is considered outstanding if it has not yet been
committed or rolled back. A checkpoint command issued by a user does not cause truncation of
the transaction log, even when the trunc log on chkpt option is set. Only implicit checkpoints
performed automatically by SQL Server result in this truncation. These automatic checkpoints
are performed using the internal SQL Server process called the checkpoint process.
The checkpoint process wakes up about every 60 seconds and cycles through every database to
determine if it needs to perform a checkpoint. This determination is based on the recovery
interval configuration parameter and the number of rows added to the log since the last
checkpoint. Only those rows associated with committed transactions are considered in this
calculation.
If the trunc log on chkpt option is set, the checkpoint process attempts to truncate the log every
sixty seconds, regardless of the recovery interval or the number of log records. If nothing will be
gained from this truncation, it is not done.
Transaction Logs and the recovery interval
The recovery interval is a configuration parameter that defines the amount of time for the
recovery of a single database. If the activity in the database is such that recovery would take
longer than the recovery interval, the SQL Server checkpoint process issues a checkpoint.
Because the checkpoint process only examines a particular database every 60 seconds, enough
logged activity can occur during this interval that the actual recovery time required exceeds the
time specified in the recovery interval parameter.
Note that the transaction log of the tempdb database is automatically truncated during every
cycle of the checkpoint process, or about every 60 seconds. This occurs whether the trunc log on
chkpt option is set on tempdb or not.
Transaction logging performed by SQL Server cannot be turned off, to ensure the recoverability
of all transactions performed on SQL Server. Any SQL statement or set of statements that
modifies data is a transaction and is logged. You can, however, limit the amount of logging
performed for some specific operations, such as bulk copying data into a database using bulk
copy (bcp) in the fast mode, performing a select/into query, or truncating the log. See the Tools
and Connectivity Troubleshooting Guide and the SQL Server Reference Manual for more
information on bcp. These minimally logged operations cause the transaction log to get out of
sync with the data in a database, which makes the transaction log useless for media recovery.
Once a non-logged operation has been performed, the transaction log cannot be dumped to a
device, but it can still be truncated. You must do a dump database to create a new point of
synchronization between the database and the transaction log to allow the log to be dumped to
device.
An update is logged as two records:
* A data delete record, including all the data in the original row.
* A data insert record, including all the data in the modified row.
There is no hard and fast rule dictating how big a transaction log should be. For new databases, a
log size of about 20 percent of the overall database size is a good starting point. The actual size
required depends on how the database is being used; for example:
* Whether or not the transaction log is being saved for media recovery purposes
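The 20 percent starting point can be sketched as quick shell arithmetic; the 2000 MB database size below is a hypothetical example:

```shell
# Hedged sketch: initial log sizing for a new database.
db_size_mb=2000
log_size_mb=$((db_size_mb * 20 / 100))   # 20% starting point

echo "starting log size: ${log_size_mb} MB"   # 400 MB
```

As the surrounding text says, this is only a starting point; the real requirement has to be measured under a production-like workload.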
Because there are many factors involved in transaction logging, you usually cannot accurately
determine in advance how much log space a particular database requires. The best way to
estimate this size is to simulate the production environment as closely as possible in a test. This
includes running the applications with the same number of users as will be using the database in
production.
Always store transaction logs on a separate database device and segment from the actual data. If
the data and log are on the same segment, you cannot save transaction log dumps. Up-to-date
recovery after a media failure is therefore not possible. If the device is mirrored, however, you
may be able to recover from a hardware failure. Refer to the System Administration Guide for
more information.
Also, the data and the log must be on separate segments so that you can determine the amount of log space used: dbcc checktable on syslogs reports the amount of log space used and what percentage of the log is full only if the log is on its own segment.
Finally, because the transaction log is appended each time the database is modified, it is accessed
frequently. You can increase performance for logged operations by placing the log and data
segments on different physical devices, such as different disks and controllers. This divides the
I/O requests for a database between two devices.
The transaction log must be truncated periodically to prevent it from filling up. You can do this
either by enabling the trunc log on chkpt option or by regularly executing the dump transaction
command.
WARNING!
Up-to-the-minute recoverability is not guaranteed on systems when the trunc log on chkpt option
is used. If you use this on production systems and a problem occurs, you will only be able to
recover up to your last database dump.
Because the trunc log on chkpt option causes the equivalent of the dump transaction with
truncate_only command to be executed, it truncates the log without saving it to a device. Use this
option only on databases for which transaction log dumps are not being saved to recover from a
media failure, usually only development systems.
Even if this option is enabled, you might have to execute explicit dump transaction commands to
prevent the log from filling during peak loads.
If you are in a production environment and using dump transaction to truncate the log, space the
commands so that no process ever receives an 1105 (out of log space) error.
When you execute a dump transaction, transactions completed prior to the oldest outstanding
transaction are truncated from the log, unless they are on the same log page as the last
outstanding transaction. All transactions since the earliest outstanding transaction are considered
active, even if they have completed, and are not truncated.
This figure shows that all transactions after an outstanding transaction are considered active.
Note that the page numbers do not necessarily increase over time.
Because the dump transaction command only truncates the inactive portion of the log, you
should not allow stranded transactions to exist for a long time. For example, suppose a user
issues a begin transaction command and never commits the transaction. Nothing logged after the
begin transaction can be purged out of the log until one of the following occurs:
* The user issuing the transaction completes it.
* The user process issuing the command is forcibly stopped, and the transaction is rolled back.
Stranded transactions are usually due to application problems but can also occur as a result of
operating system or SQL Server errors. See, Managing Large Transactions, below, for more
information.
In SQL Server release 11.0 and later, you can query the syslogshold system table to determine
the oldest active transaction in each database. syslogshold resides in the master database, and
each row in the table represents either:
* the oldest active transaction in the database, or
* the Replication Server truncation point for the database.
A database may have no rows in syslogshold, a row representing one of the above, or two rows representing both of the above. For information about how Replication Server truncation points affect the truncation of a database's transaction log, see your Replication Server documentation.
Querying syslogshold can help you when the transaction log becomes too full, even with
frequent log dumps. The dump transaction command truncates the log by removing all pages
from the beginning of the log up to the page that precedes the page containing an uncommitted
transaction record (the oldest active transaction). The longer this active transaction remains
uncommitted, the less space is available in the transaction log, since dump transaction cannot
truncate additional pages.
For information about how to query syslogshold to determine the oldest active transaction that is
holding up your transaction dumps, see Backing Up and Restoring User Databases in the System
Administration Guide.
Because of the amount of data SQL Server logs, it is important to manage large transactions
efficiently. Four common transaction types can result in extensive logging:
* Mass updates
* Deleting a table
* Inserting data based on a subquery
* Bulk copying in
The following sections contain explanations of how to use these transactions so that they do not
cause extensive logging.
Mass Updates
The following SQL statement updates every row in the large_tab table. All of these individual updates are part of the same single transaction:
1> update large_tab set col1 = 0
2> go
On a large table, this query results in extensive logging, often filling up the transaction log before
completing. In this case, an 1105 error (transaction log full) results. The portion of the
transaction that was processed is rolled back, which can also require significant server resources.
Another disadvantage of unnecessarily large transactions is the number or type of locks held. An
exclusive table lock is normally acquired for a mass update, which prevents all other users from
modifying the table during the update. This may cause deadlocks.
You can sometimes avoid this situation by breaking up large transactions into several smaller
ones and executing a dump transaction between the different parts. For example, the single
update statement above could be broken into two or more pieces as follows:
1> update large_tab set col1 = 0
2> where col2 < x
3> go
1> dump transaction database_name
2> with truncate_only
3> go
1> update large_tab set col1 = 0
2> where col2 >= x
3> go
1> dump transaction database_name
2> with truncate_only
3> go
This example assumes that about half the rows in the table meet the condition col2 < x and the
remaining rows meet the condition col2 >= x.
If transaction logs are saved for media failure recovery, the log should be dumped to a device and
the with truncate_only option should not be used. Once you execute a dump transaction with
truncate_only, you must dump the database before you can dump the transaction log to a device.
Delete Table
The following SQL statement deletes the contents of the large_tab table within a single transaction and logs the complete before-image of every row in the transaction log:
1> delete large_tab
2> go
If this transaction fails before completing, SQL Server can roll back the transaction and leave the
table as it was before the delete. Usually, however, you do not need to provide for the recovery of
a delete table operation. If the operation fails halfway through, you can simply repeat it and the
result is the same. Therefore, the logging done by an unqualified delete table statement may not
always be needed.
You can use the truncate table command to accomplish the same thing without the extensive
logging:
1> truncate table large_tab
2> go
This command also deletes the contents of the table, but it logs only space deallocation
operations, not the complete before-image of every row.
The following SQL statement inserts every row from large_tab into new_tab within a single transaction:
1> insert into new_tab select * from large_tab
2> go
Each insert operation is logged, and the records remain in the transaction log until the entire
statement has completed. Also, any locks required to process the inserts remain in place until the
transaction is committed or rolled back. This type of operation may fill the transaction log or
result in deadlock problems if other queries are attempting to access new_tab. Again, you can
often solve the problem by breaking up the statement into several statements that accomplish the
same logical task. For example:
1> insert into new_tab
2> select * from large_tab where col1 <= y
3> go
1> dump transaction database_name
2> with truncate_only
3> go
1> insert into new_tab
2> select * from large_tab where col1 > y
3> go
1> dump transaction database_name
2> with truncate_only
3> go
Note
This approach assumes that y represents a median value for col1. It also assumes that null values
are not allowed in col1. The inserts run significantly faster if a clustered index exists on
large_tab.col1, although it is not required.
If transaction logs are saved for media failure recovery, the log should be dumped to a device and
the with truncate_only option should not be used. Once you execute a dump transaction with
truncate_only, you must dump the database before you can dump the transaction log to a device.
Bulk Copy
You can break up large transactions when using bcp to bulk copy data into a database. If you use
bcp without specifying a batch size, the entire operation is performed as a single logical
transaction. Even if another user process does a dump transaction command, the log records
associated with the bulk copy operation remain in the log until the entire operation completes
and another dump transaction command is performed. This is one of the most common causes of
the 1105 error. You can avoid it by breaking up the bulk copy operation into batches. Use this
procedure to ensure recoverability:
1. Before starting the bcp session, turn on the trunc log on chkpt option so the log is truncated automatically at each checkpoint:
1> use master
2> go
1> sp_dboption pubs2, "trunc log on chkpt", true
2> go
1> use pubs2
2> go
1> checkpoint
2> go
2. Specify the batch size on the bcp command line. This example copies rows into the pubs2..authors table in batches of 100:
bcp pubs2..authors in authors_file -b 100
3. Turn off the trunc log on chkpt option when the bcp operations are complete, and dump the
database.
In this example, a batch size of 100 rows is specified, resulting in one transaction per 100 rows
copied. You may also need to break the bcp input file into two or more separate files and execute
a dump transaction between the copying of each file to prevent the transaction log from filling
up.
If the bcp in operation is performed in the fast mode (with no indexes or triggers), the operation
is not logged. In other words, only the space allocations are logged, not the complete table. The
transaction log cannot be dumped to a device in this case until after a database dump is
performed (for recoverability).
If your log is too small to accommodate the amount of data being copied in, you may want to do
batching and have the sp_dboption trunc log on checkpoint set. This will truncate the log after
each checkpoint.
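In isql, enabling the option for a hypothetical database mydb (the database name is an assumption here) looks like:

```sql
use master
go
-- truncate the log automatically at each checkpoint while the bcp runs
sp_dboption mydb, "trunc log on chkpt", true
go
```

Remember to turn the option off and dump the database afterwards, since the transaction log cannot be dumped to a device while the option is on.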
Sybase Tempdb space management and addressing tempdb log full issues
A default installation of Sybase ASE has a small tempdb located on the master device. Almost all
ASE implementations need a much larger temporary database to handle sorts and worktables, and therefore DBAs need to increase tempdb. This document gives some recommendations on how this can be done and describes various techniques to guarantee maximum availability of tempdb.
About Segments
Tempdb is basically just another database within the server, with three segments (see "What's a segment?"): system for system tables like sysobjects and syscolumns, default to store objects such as tables, and logsegment for the transaction log (the syslogs table). With this segmentation, no matter the size of the database, the transaction log has no fixed amount of space of its own; the only limitation is the available space within the database. The following script illustrates that this can lead to nasty problems.
create table #a (col1 char(255))
go
declare @a int
select @a = 1
while @a > 0
begin
insert into #a values (replicate("x", 255))
end
go
Running the script populates table #a and the transaction log at the same time, until tempdb is
full. Then the log gets automatically truncated by ASE, allowing for more rows to be inserted in
the table until tempdb is full again. This cycle repeats itself a number of times until tempdb is
filled up to the point that even the transaction log cannot be truncated anymore. At that point the
ASE errorlog will show messages like "1 task(s) are sleeping waiting for space to become available in the log segment for database tempdb". When you log on to ASE to resolve this problem and you run sp_who, you will get "Failed to allocate disk space for a work table in database tempdb. You may be able to free up space by using the DUMP TRANsaction command, or you may want to extend the size of the database by using the ALTER DATABASE command."
Your first task is to kill off the process that causes the problem, but how can you know which process to kill if you can't even run sp_who? This problem can be solved with the lct_admin function. In the form lct_admin("abort", 0, <dbid>) it kills the sessions in that database that are waiting on a log suspend. Since tempdb always has database ID 2, you do a:
select lct_admin("abort", 0, 2)
When you execute the lct_admin function the session is killed, but tempdb is still full. In fact it's so full that table #a cannot be dropped, because this action must also be logged in the transaction log of tempdb. Besides a reboot of the server, you would have no other option than to increase tempdb (alter database) with just a bit more space for the logsegment.
This extends tempdb and makes it possible to drop table #a and to truncate the transaction log. In
a real-life situation this scenario could cause significant problems for users.
One of the database options that can be set with the sp_dboption stored procedure can be used to prevent this. When you do:
sp_dboption tempdb, "abort tran on log full", true
(for pre-12.5.1: followed by a checkpoint in tempdb), the transaction that fills up the transaction log in tempdb is automatically aborted by the server.
The default or system segments in tempdb, where the actual data is stored, can also get full, just
like any ordinary database. Your query is cancelled with Msg 1105: "Can't allocate space for object '#a_____00000180017895422' in database 'tempdb' because 'default' segment is full/has no free extents. If you ran out of space in syslogs, dump the transaction log. Otherwise, use ALTER DATABASE or sp_extendsegment to increase the size of the segment." This message can be
caused by a query that creates a large table in tempdb, or an internal worktable created by ASE
used for sorts, etc. Potentially, this problem is much worse than a full transaction log since the
transaction is cancelled. A full log segment leads to sleeping processes until the problem is
resolved. However, a full data segment leads to aborted transactions.
The Resource Governor in ASE allows you to deal with these circumstances. You can specify
just how much space a session is allowed to consume within tempdb. When the space usage
exceeds the specified limit the session is given a warning or is killed. Before using this feature
you must configure ASE (with sp_configure) to use the Resource Governor:
sp_configure "allow resource limits", 1
After a reboot of the server (12.5.1. too) you can use limits: (sp_add_resource_limit)
sp_add_resource_limit petersap, null, "at all times", tempdb_space, 200
This limit means that the user petersap is allowed to use 200 pages within tempdb. When the
limit is exceeded the session receives an error message (Msg 11056) and the query is aborted.
Different options for sp_add_resource_limit make it possible to kill the session when the limit is
exceeded. Just how many pages a user should be allowed to use in tempdb depends on your environment. Things like the size of tempdb, the number of concurrent users, and the type of queries should be taken into account when setting the resource limit. When a resource limit for tempdb is crossed, it is logged in the Sybase errorlog. This makes it possible to trace how often a limit is exceeded and by whom. With this information the resource limit can be tuned. When you use multiple temporary databases the limit is enforced on all of them.
For performance reasons it makes sense to separate the system+default segments and the logsegment from each other. Not all sites follow this policy; it's a tradeoff between the flexibility of having data and log combined and some increased performance. Since tempdb is a heavily used database, it's not a bad idea to invest some time in an investigation of the space requirements. The following example illustrates how tempdb could be configured with separate devices for the logsegment and the data. The example is based on an initial setting of tempdb on the master device. First we increase tempdb for the system and data segments:
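A sketch of such an increase, assuming a data device named tempdb_data has already been initialized (the device name and size are assumptions):

```sql
-- extend tempdb's system and default segments onto the data device (size in MB)
alter database tempdb on tempdb_data = 100
go
```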
When you have done this and run sp_helpdb tempdb, you will see that data and log are still on the same devices. Submit the following to resolve this (sp_logdevice):
sp_logdevice tempdb,
Please note that tempdb should not be increased on the master device.
The dsync option for devices allows you to enable/disable I/O buffering to file systems. The
option is not available for raw partitions and NT files. To get the maximum possible performance
for tempdb use dedicated device files, created with the Sybase disk init command. The files
should be placed on file system, not on raw partitions. Set the dsync option off as in the
following example (disk init):
disk init
name = "tempdb_data",
physname = "/var/sybase/tempdb_data.dat",
size = "500M",
dsync = false
When you have increased tempdb on separate devices you can configure tempdb so that the
master device is unused. This increases the performance of tempdb even further. There are
various techniques for this, all with their pros and cons but I recommend the following. Modify
sysusages so that segmap will be set to 0 for the master device. In other words, change the
segments of tempdb so that the master device is unused. This can be done with the following
statements:
sp_configure "allow updates to system tables", 1
go
update master..sysusages
set segmap = 0
where dbid = 2
and lstart = 0
go
sp_configure "allow updates to system tables", 0
go
checkpoint
go
When you use this configuration you should know the recovery procedure just in case one of the
devices of tempdb gets corrupted or lost. Start your ASE in single user mode by adding the -m
switch to the dataserver options. Then submit the following statements:
update master..sysusages
set segmap = 7
where dbid = 2
and lstart = 0
go
delete master..sysusages
where dbid = 2
and lstart > 0
go
Remove the -m switch from the dataserver options and restart ASE. Your tempdb is now
available with the default allocation on the master device.
You don't *have* to create threshold action procedures for any segment, but you *can* define thresholds on any segment. The log segment has a default "last chance" threshold set up that calls a procedure named sp_thresholdaction. It is a good idea to define sp_thresholdaction, but you don't have to; if you don't, you will just get a "proc not found" error when the log fills up, and you will have to take care of it manually.
Thresholds are created only on segments, not on devices or databases. You can create the procedure in sybsystemprocs with a name starting with sp_ so that multiple databases share the same procedure, but often each database has its own requirements, so the procedures are created locally instead.
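A minimal sp_thresholdaction sketch is shown below; the four parameters are the standard signature ASE passes to the procedure, while the dump device path is an assumption:

```sql
create procedure sp_thresholdaction
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
begin
    -- report the event, then free log space; the dump path is a placeholder
    print "Threshold fired: %1! pages left on segment %2! of database %3!",
        @space_left, @segmentname, @dbname
    dump transaction @dbname to "/sybase/dumps/log_dump.trn"
end
```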
Use dbcc checktable(syslogs) for an accurate check of free space in Sybase Adaptive Server
Enterprise.
Contents
When you need to check free space in the transaction log, you would typically use the stored procedure sp_helpdb. While sp_helpdb is useful for a general estimate of free space, for a precise figure use one of the following methods:
* Run dbcc checktable(syslogs).
* Determine the number of data pages in the transaction log via an isql script, for example (syslogs has object ID 8):
1> select data_pgs(8, doampg) from sysindexes where id = 8
2> go
Sybase recommends sp_helpdb for most situations because it reports quickly. sp_helpdb uses the
unreserved page count in sysusages. However, unreserved page count is updated intermittently
and therefore may not accurately reflect the actual state of the database. Thus, even when sp_helpdb reports free space, a subsequent insert may still run out of space, resulting in error message 1105, which reads in part:
Can't allocate space for object because log segment full
If this error occurs, follow the instructions in Runtime 1105 Errors: State 3 in the Error Message
Writeups chapter of the Adaptive Server Enterprise Troubleshooting and Error Messages Guide.
The dbcc checktable (syslogs) command also checks for possible corruption as well as the size of
the log. However, it can take a long time to run, depending on the size of the log. For more
information about dbcc checktable, see the chapter, Checking Database Consistency in the
Adaptive Server Enterprise System Administration Guide.
The isql script is more accurate than sp_helpdb. It is described in the Error 1105 section in Error
Message Writeups chapter of the Adaptive Server Enterprise Troubleshooting and Error
Messages Guide.
* A large number of forwarded rows causes extra I/O during read operations.
* Inserts and serializable reads are slow because they encounter pages with noncontiguous free space that needs to be reclaimed.
* Large I/O operations are slow because of low cluster ratios for data and index pages.
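On data-only-locked tables, the reorg command addresses exactly these symptoms; a sketch (table name assumed):

```sql
-- unforward rows, then reclaim unused space
reorg forwarded_rows mytab
go
reorg reclaim_space mytab
go
```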
In my opinion, these are (in order of importance): (i) ensure a proper database / log dump schedule for all databases (including master); (ii) run dbcc checkstorage on all databases regularly (at least weekly), and follow up any corruption problems found; (iii) run update [index] statistics at least weekly on all user tables; (iv) monitor the server errorlog for messages indicating problems (daily). Of course, a DBA has many other things to do as well, such as supporting users and developers, monitoring performance, etc.
77. What is the bit datatype and what information can be stored inside a bit column?
The bit datatype is used to store Boolean information like 1 or 0 (true or false). Until SQL Server 6.5 the bit datatype could hold either a 1 or a 0, with no support for NULL. From SQL Server 7.0 onwards, the bit datatype can represent a third state, which is NULL. (Note that this answer describes Microsoft SQL Server; in Sybase ASE a bit column does not allow NULLs.)
A trigger fires when an event occurs on a table, such as an insert, delete or update. There are three types of triggers available with Sybase: insert, update and delete triggers.
Triggers are automatic. They work no matter what caused the data modification: a clerk's data entry or an application action. A trigger is specific to one or more of the data modification operations (update, insert, and delete), and is executed once for each SQL statement.
For example, to prevent users from removing any publishing companies from the publishers
table, you could use this trigger:
create trigger del_pub
on publishers
for delete
as
begin
rollback transaction
print "You cannot delete any publishers."
end
The next time someone tries to remove a row from the publishers table, the del_pub trigger
cancels the deletion, rolls back the transaction, and prints a message.
A trigger fires only after the data modification statement has completed and Adaptive Server
has checked for any datatype, rule, or integrity constraint violation. The trigger and the statement
that fires it are treated as a single transaction that can be rolled back from within the trigger. If
Adaptive Server detects a severe error, the entire transaction is rolled back.
* Triggers can cascade changes through related tables in the database. For example, a
delete trigger on the title_id column of the titles table can delete matching rows in other tables,
using the title_id column as a unique key to locating rows in titleauthor and roysched.
* Triggers can disallow, or roll back, changes that would violate referential integrity,
canceling the attempted data modification transaction. Such a trigger might go into effect when
you try to insert a foreign key that does not match its primary key. For example, you could create
an insert trigger on titleauthor that rolled back an insert if the new titleauthor.title_id value did
not have a matching value in titles.title_id.
* Triggers can enforce restrictions that are much more complex than those that are defined with rules. Unlike rules, triggers can reference columns or database objects. For example, a trigger can roll back updates that attempt to increase a book's price by more than 1 percent of the advance.
* Triggers can perform simple "what if" analyses. For example, a trigger can compare the state of a table before and after a data modification and take action based on that comparison.
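The cascading delete described in the first bullet could be sketched like this for the pubs2 tables (a sketch, not the exact trigger from the manual):

```sql
create trigger deltitle
on titles
for delete
as
begin
    -- the deleted pseudo-table supplies the keys of the removed titles
    delete titleauthor
    from titleauthor, deleted
    where titleauthor.title_id = deleted.title_id
    delete roysched
    from roysched, deleted
    where roysched.title_id = deleted.title_id
end
```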
Triggers in Sybase
A trigger is a special type of stored procedure that gets executed automatically when a DML operation takes place on a table.
* Triggers can be used to apply more complex restrictions than those enforced using rules.
* Triggers can perform analysis before and after changes to the table.
5. update statistics
6. reconfigure
7. disk init, disk mirror, disk refit, disk reinit, disk remirror, disk unmirror
8. select into
Trigger Example
create trigger del_emp
on emp
for delete
as
delete payment
from payment, deleted
where payment.emp_id = deleted.emp_id
79. How many times does a trigger fire if more than one row is inserted?
A trigger fires once per SQL statement, regardless of how many rows the statement affects; inside the trigger, the inserted table holds all of the affected rows.
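For example, one firing can inspect all affected rows at once (table and trigger names assumed):

```sql
create trigger ins_emp
on emp
for insert
as
begin
    -- @@rowcount reflects the rows affected by the statement that fired the
    -- trigger; the inserted table contains all of them
    if @@rowcount > 1
        print "multi-row insert: trigger still fired only once"
end
```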
By creating appropriate indexes on the tables, and by writing queries that allow the optimizer to pick up the appropriate index.
82. How do you optimize a select statement?
Using SARGs in the where clause, and checking the query plan using set showplan on. If the query is not using the proper index, you may have to force the correct index to make the query run faster.
Constraints enable the RDBMS to enforce the integrity of the database automatically, without needing you to create triggers, rules or defaults.
Types of constraints: NOT NULL, CHECK, UNIQUE, PRIMARY KEY, FOREIGN KEY
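All five constraint types can be declared inline in a create table statement; a hedged sketch (table and column names assumed):

```sql
create table orders
(
    order_id int         not null primary key,
    cust_id  int         not null references customers (cust_id),
    sku      varchar(20) not null unique,
    qty      int         not null check (qty > 0)
)
```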
85. What are the steps you will take to improve performance of a poor performing query?
This is very open ended question and there could be a lot of reasons behind the poor performance
of a query. But some general issues that you could talk about would be: No indexes, table scans,
missing or out of date statistics, blocking, excess recompilations of stored procedures,
procedures and triggers without SET NOCOUNT ON, poorly written query with unnecessarily
complicated joins, too much normalization, excess usage of cursors and temporary tables.
Some of the tools / ways that help you troubleshoot performance problems are:
SET SHOWPLAN ON
86. What would you do when the ASE server's performance is bad?
"Bad performance" is not a very meaningful term, so you'll need to get a more objective diagnosis first. Find out (i) what such a complaint is based on (a clearly increasing response time, or just a feeling that it's slower?), (ii) for which applications / queries / users this seems to be happening, and (iii) whether it happens continuously or just incidentally. Without identifying a specific, reproducible problem, any action is no better than speculation.
Wrong: a segment can never get full (even though some error messages state something to that extent). A segment is a label for one or more database device fragments; the fragments to which that label has been mapped can get full, but the segment itself cannot. (Well, OK, this is a bit of a trick question; when those device fragments fill up, you either add more space or clean up old / redundant data.)
88. Is it a good idea to use data rows locking for all tables by default?
Not by default. Only if you're having concurrency (locking) problems on a table, and you're not locking many rows of the table in a single transaction, should you consider datarows locking for that table. In all other cases, use either datapages or allpages locking.
(Some DBAs use datapages locking as the default lock scheme for all tables, because switching from there to datarows locking is fast and easy, whereas converting from allpages locking rebuilds the entire table, which may take long for large tables. Also, datapages locking has other advantages over allpages, such as not locking index pages, update statistics running at level 0, and the availability of the reorg command.)
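Switching a table's lock scheme is a single command (table name assumed):

```sql
alter table mytab lock datarows
```

Moving between datapages and datarows is quick, while converting an allpages table to a data-only scheme rebuilds the table.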
89. Is there any advantage in using 64-bit version of ASE instead of the 32-bit version?
The only difference is that the 64-bit version of ASE can handle a larger data cache than the 32-bit version, so you'd optimize on physical I/O. Therefore, this may be an advantage if the amount of data cache is currently a bottleneck. There's no point in using 64-bit ASE with the same amount of total memory as the 32-bit version, because 64-bit ASE comes with additional overhead in memory usage, so the net amount of data cache would actually be less for 64-bit than for 32-bit in this case.
90. What is difference between managing permissions through users and groups or through
user-defined roles?
The main difference is that user-defined roles (introduced in ASE 11.5) are server-wide and are granted to logins. Users and groups (the classic method that has existed since the first version of Sybase) are limited to a single database. Permissions can be granted / revoked to both user-defined roles and users / groups. Whichever method you choose, don't mix them, as the precedence rules are complicated.
91. How do you BCP only a certain set of rows out of a large table?
If you're on ASE 11.5 or later, create a view for those rows and BCP out from the view. In earlier ASE versions, you'll have to select those rows into a separate table first and BCP out from that table. In both cases, the speed of copying the data depends on whether there is a suitable index for retrieving the rows.
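For the ASE 11.5+ route, the view might look like this (view, table and column names assumed); bcp out can then reference the view as if it were a table:

```sql
create view ca_customers as
select * from customers
where state = "CA"
```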
92. What are the main advantages and disadvantages of using identity columns?
The main advantage of an identity column is that it can generate unique, sequential numbers very
efficiently, requiring only a minimal amount of I/O. The disadvantage is that the generated values themselves are not transactional, and that the identity values may jump enormously when the server is shut down the rough way (resulting in "identity gaps"). You should therefore only use identity columns in applications if you've addressed these issues.
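Declaring an identity column is straightforward (names assumed); the precision of the numeric type bounds the largest value the server can generate:

```sql
create table audit_log
(
    entry_id numeric(10,0) identity,  -- values generated by the server
    message  varchar(255)  not null
)
```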
93. Is there any disadvantage of splitting up your application data into a number of
different databases?
When there are relations between tables / objects across the different databases, then there is a disadvantage indeed: if you restore a dump of one of the databases, those relations may not be consistent anymore. This means that you should always back up such a consistent set of databases as the unit of backup / restore. Therefore, when making this kind of design decision, backup / restore issues should be considered (and the DBA should be consulted).
select "Server Start Time" = crdate from master..sysdatabases where name = "tempdb"
or
select * from sysengines
This is the Sybase TS method of removing most activity from the master device.
Most people use the "sa" account all of the time, which is fine if there is only ever one DBA administering the system. If you have more than one person accessing the server using the "sa" account, consider using sa_role-enabled accounts and disabling the "sa" account. Funnily enough, this is obviously what Sybase thinks, because it is one of the questions in the certification exams.
If you see that someone is logged in using the "sa" account, or is using an account with sa_role enabled, you can change its password out from under them (with sa_role, the caller's password can be given as null):
sp_password null, newPassword
go
97. What are the 4 isolation levels, and which is the default one?
Levels 0 (read uncommitted), 1 (read committed), 2 (repeatable read) and 3 (serializable). The default is level 1.
In chained mode the server executes an implicit begin tran, whereas in unchained mode an explicit begin tran is required.
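For example (table name assumed):

```sql
set chained on
go
-- in chained mode this insert implicitly begins a transaction...
insert into t values (1)
-- ...which stays open until an explicit commit or rollback
commit
```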
99. dump transaction with standby_access is used to?
Guys, I have collected some Sybase interview questions from the folks who attended Morgan
Stanley, Mumbai interview recently. Please try to post correct answers so that everyone benefits
from this.
2. How to check the query plan, and how to get the query plan without executing the query?
3. Diff. clustered and non-clustered indexes and when to create them. Number of clustered / non-clustered indexes that can be created on a specific table?
4. Types of locks in Sybase. Is shared on shared lock, shared on exclusive, exclusive on exclusive lock possible?
6. What is the default isolation level in Sybase and what is the purpose of using isolation levels?
14. If the table doesn't have an index, will Sybase allow creating an updatable view on it?
5. Print deadlock information to sybase log but this can degrade sybase performance.
6. The default isolation level is 1. Isolation levels specify the kinds of interactions that are not permitted while concurrent transactions are executing; that is, whether transactions are isolated from each other, or whether they can read or update information in use by another transaction. Sybase supports 4 isolation levels: level 0 (read uncommitted), level 1 (read committed), level 2 (repeatable read) and level 3 (serializable read).
11. The max file size limit gets exceeded if 10 million or more rows are bcp'd out. To avoid that, we can use the -F and -L options of the bcp utility to split the bcp out across multiple files.
13. with check option restricts the rows that can be updated or inserted through the view to those that satisfy the view's where clause, for example:
create view ca_authors as
select * from authors
where state = "CA"
with check option
15. Only one table can be updated at a time and view has no with check option
7. If memory is ample then joins are preferable. A join often performs better than a subquery, as subqueries involve the creation of intermediate tables and more I/O.
1) Using the below-mentioned query you can find the duplicate values (a standard form; table and column names assumed):
select col1, count(*)
from t
group by col1
having count(*) > 1
2) We can use sp_lock and sp_familylock to see the locks held in the database.
3) select * from sysprocesses (here we can see the CPU utilization, engine number and blocked processes) or sp_who
4) If you want to improve the performance of this query, we have to create an index on the relevant columns.
5) We can analyze the query using its query plan (sp_showplan)
6) Need to Check
1. What databases are created in Sybase by default when installed?
#abc, tempdb..abc
4. What happens exactly when the sybase server is bounced? How are the tempdb.. tables
dropped?
???
Performance Tuning:
and goes on in an infinite join, Sybase defers the update to the table until all rows are scanned. I think it stores the intermediate rows.
13. How to get query plan? How to get the query plan if I dont want to execute the query?
SET SHOWPLAN ON
SET NOEXEC ON
SET FMTONLY ON
The @@error variable is not equal to zero when there was an error in the just-executed SQL statement.
15. How will you pass the error message from stored procedure to the application program?
http://manuals.sybase.com/onlinebooks/group-
as/asg1250e/sqlug/@Generic__BookTextView/53713;pt=52735/*
17. How does sybase internally manages a transaction?
18. In a nested transaction, if you issue a rollback at the end all transactions are rolled back. How
does sybase do this?
???
http://infocenter.sybase.com/help/index.jsp?
topic=/com.sybase.dc20021_1251/html/locking/X25549.htm
???
20. How do you define what lock to be applied when defining a table?
21. What is the difference between Row level, Page level, Table level locks? Which is preferred?
???
22. What is the default locking scheme is Sybase? Why Sybase decide to use this?
??? The default is allpages locking: Sybase takes page locks while locating the row, then an exclusive lock to modify it.
24. Which lock should be used? Which is faster (or something like that he asked)?
??? For selections or updates over a wide data range, page-level locking; otherwise row-level locking.
25. If monitoring tool is not installed how will you indentify the slow sql in a application?
AQP?
27. How will you apply AQP to a query within a stored procedure?
??
28. What are the tools available in Sybase for performance tuning?
29. What are indexes and their types? Diff between clustered and non-clustered index?
http://www.sybaseteam.com/showthread.php?tid=405
Every time the SP is executed, a new query plan is created. Used when the data in the tables referenced by the SP changes drastically/dynamically.
??? Causes each stored procedure and trigger that uses the named table to be recompiled the
next time it runs.
sp_recompile objname
33. What are the advantages of views?
abstraction; not all data of the same table can be shown to the user.
http://sqlserverpedia.com/wiki/Views_-_Advantages_and_Disadvantages
35. When a new column is added to a table and there is a view on that table defined as select * from table, will the view include the new column when you execute it?
No, because the select * is internally expanded into the individual column names when the view is created, so the view will not know about the new column.
36. When a user manually updates a column, say flag, in the table (there may be many other columns), how can that change be validated?
With an update trigger, for example:
create trigger upd_emp
on emp
for update
as
begin
if exists (select 1 from deleted D, inserted I where D.flag != I.flag)
begin
-- validation / corrective action goes here
end
end
38. What are different BCP types? What are the options available?
??? Fast bcp (drop triggers and indexes on the table, then bcp) and slow bcp.
39. What is the batch option in BCP? When the -b option is not given and you bcp in 4 million records, what happens?
??? The transaction log blows up, as slow bcp is a logged operation. A long open transaction also creates problems.
40. What happens exactly when a BCP with batch option is done?
???
41. What is the use of identity column? Can we give our own value? Do you know of identity
gaps?
??? Sequential values generated by Sybase; yes, we can supply our own value (using set identity_insert on). Identity gaps are ranges of skipped values, for example where rows were deleted in between or after an unclean shutdown.
42. UNION and UNION ALL? What is the difference which is faster?
43. What is correlated sub query? What happens exactly in a correlated sub query?
45. How can I ignore duplicates while loading data through BCP?
47. What are the system tables have you seen so far?
UNIX
grep
3. What is SED?
ps
SQL is executed
Task is put to sleep pending lock acquisition and logical or physical I/O
Status Values
Reported by
sp_who
Only a System Administrator can issue the kill command: permission to use it cannot be
transferred.
T-SQL query to get all the tables and lock scheme info.
The following query gives a list of all the user tables and the locking scheme of each table.
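A sketch of such a query, assuming the lockscheme() built-in function (available in ASE 11.9.2 and later):

```sql
select name, "lock scheme" = lockscheme(id)
from sysobjects
where type = "U"    -- user tables only
order by name
```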
Q1: Please let me know system db names, what is the purpose of sybsystemdb?
Q2: Suppose our tempdb is filling up or filled up and you can't recycle the db server; what would be your steps?
Q3: The business team (AD) is reporting slow query performance; how will you investigate? Please consider all cases. (Hint: memory, stats, indexes, reorg, locks etc.)
Q4: Suppose our tempdb is not recovered; can we create a new database?
Q5: We have configured 7 dataserver engines for our PROD server (we have sufficient cpus), but we are still facing a performance hit. Possible root cause?
Q6: Suppose we are doing an ASE 15 upgrade by dump & load, and the 12.5 server has 2000 logins. Since syslogins has a different table structure in the two environments, we can't use bcp; how will we move these logins from 12.5 to 15.0?
Q8: What is your org's backup policy? What is dump tran with standby_access?
Q11: What is the bypass recovery, when we require the bypass recovery?
Q12: What is the difference between shutdown and shutdown with no_wait, besides the
immediate shutdown difference.
Q13: Suppose huge transactions are going on in one of our databases and we issue shutdown with no_wait. Will it affect the server restart, and how?
Q14: What is a named data cache, what is buffer pooling, and how does the cache hit ratio affect system performance?
Q15: We are getting stack traces for one of our databases; how will you investigate?
Q20: Can we run update statistics on one table in two steps (half the table first, then the rest)?
5. What's the diff between a role and a group, and which one is better?
6. How can we sync the logins from the prod to the uat server? How many tables do we need to take care of for the login sync?
10. Explain syslogins, syssrvroles, sysloginroles and sysroles, and what is the linkage among them?
12. During a refresh from the PROD -> UAT env, which tables do we need to take care of?
17. What is the guest user in a database and why do we require the guest user?
20. Can we enable the password history feature? From which version is it available and how can we do that?
21. Can we configure one SQL proc which executes during login, and how can we do that?
1. How can we get the compression level information from the dump files?
6. Give two benefits of creating a database using the for load option.
7. What are the new features of Sybase 15? And which are you using in your day to day operations?
8. What is the join order in ASE (suppose we have 4-5 tables of different sizes)?
Replication Server:
Q1: How can we know, the current ASE and Replication Server Setup is warm standby setup or
not?
Q4: In how many ways we can know the tran details which is causing the thread down?
Q5 : Pls explain the functionality of rep server starting from PDB logs to RDB
Q6: What is the diff between DSI and DSI EXEC thread?
Q8: Suppose our queues are filling up and will be 100% full in the next 2 hrs. How will you investigate, and what are the steps for troubleshooting?
Q9: How can we know RSSD server name from replication Server?
New Questions:
How can we check whether the current replication setup is WS, table level or db level?
What would be the impact of a long running tran in the PDB on the whole replication setup?
What is dbcc settrunc('ltm', 'valid'/'ignore')? When do we use this dbcc command?
What is rs_subcmp?
4. Replication queues are filling up, Where we need to look into for root cause?
6. In a table-level replication setup we need to alter a column; what would be the steps for the same?
7. Suppose there is a size mismatch between the table columns and the replication definition; what will happen?
8. How can we refresh a database in a replication environment?
10. How can we do the master database replication? Is it possible? What information we can
replicate?
BY DAVID VANDESOMPELE
SQL cursors have been a curse to database programming for many years because of their poor
performance. On the other hand, they are extremely useful because of their flexibility in allowing
very detailed data manipulations at the row level. Using cursors against SQL Server tables can
often be avoided by employing other methods, such as using derived tables, set-based queries,
and temp tables. A discussion of all these methods is beyond the scope of this article, and there
are already many well-written articles discussing these techniques.
The focus of this article is directed at using non-cursor-based techniques for situations in which row-by-row operations are the only, or the best, method available to solve a problem. Here, I will demonstrate a few programming methods that provide most of the cursor's flexibility, but without the dramatic performance hit.
Let's begin by reviewing a simple cursor procedure that loops through a table. Then we'll examine a non-cursor procedure that performs the same task.
CREATE PROCEDURE prCustomerCursorLoop  -- illustrative name; the original was not preserved
AS
/*
** Cursor method to cycle through the Customer table and get Customer Info for each iRowId.
**
** Revision History:
**
** Date Name Description Project
**
*/
SET NOCOUNT ON

DECLARE @iRowId int,
        @vchCustomerName nvarchar(255),
        @vchCustomerNmbr nvarchar(10)

DECLARE Customer CURSOR FOR
SELECT iRowId,
       vchCustomerNmbr,
       vchCustomerName
FROM CustomerTable

OPEN Customer

FETCH NEXT FROM Customer INTO @iRowId, @vchCustomerNmbr, @vchCustomerName

WHILE @@Fetch_Status = 0
BEGIN
    -- This is where you perform your detailed row-by-row processing.
    FETCH NEXT FROM Customer INTO @iRowId, @vchCustomerNmbr, @vchCustomerName
END

CLOSE Customer
DEALLOCATE Customer
RETURN
go
As you can see, this is a very straight-forward cursor procedure that loops through a table called
CustomerTable and retrieves iRowId, vchCustomerNmbr and vchCustomerName for every
row. Now we will examine a non-cursor version that does the exact same thing:
CREATE PROCEDURE prCustomerNoCursorLoop  -- illustrative name; the original was not preserved
AS
/*
** Non-cursor method to cycle through the Customer table and get Customer Info for each
** iRowId.
**
** Revision History:
**
*/
SET NOCOUNT ON

DECLARE @iNextRowId int,
        @iCurrentRowId int,
        @iLoopControl int,
        @vchCustomerName nvarchar(255),
        @vchCustomerNmbr nvarchar(10)

-- Initialize variables!
SELECT @iLoopControl = 1
SELECT @iNextRowId = MIN(iRowId)
FROM CustomerTable

-- Make sure the table has data.
IF ISNULL(@iNextRowId,0) = 0
BEGIN
    SELECT 'No data found in table!'
    RETURN
END

-- Retrieve the first row.
SELECT @iCurrentRowId = iRowId,
       @vchCustomerNmbr = vchCustomerNmbr,
       @vchCustomerName = vchCustomerName
FROM CustomerTable
WHERE iRowId = @iNextRowId

WHILE @iLoopControl = 1
BEGIN
    -- This is where you perform your detailed row-by-row processing.

    -- Reset the looping variable, then get the next iRowId.
    SELECT @iNextRowId = NULL
    SELECT @iNextRowId = MIN(iRowId)
    FROM CustomerTable
    WHERE iRowId > @iCurrentRowId

    -- Did we get a valid next row id?
    IF ISNULL(@iNextRowId,0) = 0
    BEGIN
        BREAK
    END

    -- Get the next row.
    SELECT @iCurrentRowId = iRowId,
           @vchCustomerNmbr = vchCustomerNmbr,
           @vchCustomerName = vchCustomerName
    FROM CustomerTable
    WHERE iRowId = @iNextRowId
END
RETURN
go
For performance reasons, you will generally want to use a column like iRowId as your basis for looping and row retrieval. It should be an auto-incrementing integer data type, and ideally the primary key column with a clustered index.
There may be times when the column containing the primary key and/or clustered index is not the appropriate choice for looping and row retrieval. For example, the primary key and/or clustered index may have already been built on a column using uniqueidentifier as the data type. In such a case, you can usually add an auto-incrementing integer column to the table and build a unique index or constraint on it.
The MIN function is used in conjunction with greater than (>) to retrieve the next available iRowId. You could also use the MAX function in conjunction with less than (<) to achieve the same result:
SELECT @iNextRowId = MAX(iRowId)
FROM CustomerTable
WHERE iRowId < @iCurrentRowId
Be sure to reset your looping variable(s) to NULL before retrieving the next @iNextRowId value. This is critical because the SELECT statement used to get the next iRowId will not set @iNextRowId to NULL when it reaches the end of the table. Instead, it will fail to return any new values and @iNextRowId will keep the last valid, non-NULL value it received, throwing your procedure into an endless loop. This brings us to the next point: exiting the loop.
When @iNextRowId is NULL, meaning the loop has reached the end of the table, you can use
the BREAK command to exit the WHILE loop. There are other ways of exiting from a WHILE
loop, but the BREAK command is sufficient for this example.
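To make the control flow of this pattern concrete outside of T-SQL, here is a minimal Python sketch of the same technique. The table contents and the names rows and loop_without_cursor are illustrative stand-ins (nothing from the article); the list of dicts plays the role of CustomerTable:

```python
# Python sketch of the MIN()/">" looping technique; rows stands in for
# CustomerTable and iRowId is the auto-incrementing key (gaps are fine).
rows = [
    {"iRowId": 1, "vchCustomerName": "Acme"},
    {"iRowId": 2, "vchCustomerName": "Globex"},
    {"iRowId": 5, "vchCustomerName": "Initech"},
]

def loop_without_cursor(rows):
    visited = []
    # SELECT @iNextRowId = MIN(iRowId) FROM CustomerTable
    next_id = min((r["iRowId"] for r in rows), default=None)
    while next_id is not None:
        current = next(r for r in rows if r["iRowId"] == next_id)
        visited.append(current["vchCustomerName"])  # row-level processing here
        # SELECT @iNextRowId = MIN(iRowId) ... WHERE iRowId > @iCurrentRowId.
        # min(..., default=None) plays the role of the NULL reset: with no
        # qualifying rows left we get None instead of a stale non-NULL value,
        # so the loop cannot run forever.
        next_id = min((r["iRowId"] for r in rows if r["iRowId"] > next_id),
                      default=None)
    return visited

print(loop_without_cursor(rows))
```

In Python the `default=None` argument makes the explicit NULL reset unnecessary, but the structure is otherwise a direct translation of the T-SQL loop above.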
You will notice that in both procedures I have included the comment listed below in order to illustrate the area in which you would perform your detailed, row-level processing:
-- This is where you perform your detailed row-by-row processing.
Quite obviously, your row level processing will vary greatly, depending upon what you need to
accomplish. This variance will have the most profound impact on performance.
For example, suppose you have a more complex task which requires a nested loop. This is
equivalent to using nested cursors; the inner cursor, being dependent upon values retrieved from
the outer one, is declared, opened, closed and deallocated for every row in the outer cursor.
(Please reference the DECLARE CURSOR section in SQL Server Books Online for an example
of this.) In such a case, you will achieve much better performance by using the non-cursor looping method, because the server is not burdened by the cursor activity:
CREATE PROCEDURE prCustomerProductLoop  -- illustrative name; the original was not preserved
AS
/*
** Non-cursor method to cycle through the Customer table
** and get Customer Name for each iCustId. Get all
** products for each iCustId.
**
** Revision History:
**
*/
SET NOCOUNT ON

DECLARE @iNextCustRowId int,
        @iCurrentCustRowId int,
        @iCustLoopControl int,
        @iNextProdRowId int,
        @iCurrentProdRowId int,
        @vchCustomerName nvarchar(255),
        @chProductNumber nchar(30),
        @vchProductName nvarchar(255)

-- Initialize variables!
SELECT @iCustLoopControl = 1
SELECT @iNextCustRowId = MIN(iCustId)
FROM Customer

-- Make sure the customer table has data.
IF ISNULL(@iNextCustRowId,0) = 0
BEGIN
    SELECT 'No data found in table!'
    RETURN
END

-- Retrieve the first customer row.
SELECT @iCurrentCustRowId = iCustId,
       @vchCustomerName = vchCustomerName
FROM Customer
WHERE iCustId = @iNextCustRowId

WHILE @iCustLoopControl = 1
BEGIN
    -- Get the first product row for the current customer.
    -- (iProductId is a reconstructed name for CustomerProduct's row id column.)
    SELECT @iNextProdRowId = MIN(iProductId)
    FROM CustomerProduct
    WHERE iCustId = @iCurrentCustRowId

    IF ISNULL(@iNextProdRowId,0) = 0
    BEGIN
        SELECT 'No products found for this customer.'
    END
    ELSE
    BEGIN
        SELECT @iCurrentProdRowId = iProductId,
               @chProductNumber = chProductNumber,
               @vchProductName = vchProductName
        FROM CustomerProduct
        WHERE iProductId = @iNextProdRowId
    END

    -- Inner (product) loop: runs while a next product id exists.
    WHILE ISNULL(@iNextProdRowId,0) <> 0
    BEGIN
        -- This is where you perform your detailed row-by-row processing.

        -- Reset, then get the next product row for this customer.
        SELECT @iNextProdRowId = NULL
        SELECT @iNextProdRowId = MIN(iProductId)
        FROM CustomerProduct
        WHERE iCustId = @iCurrentCustRowId
          AND iProductId > @iCurrentProdRowId

        IF ISNULL(@iNextProdRowId,0) <> 0
        BEGIN
            SELECT @iCurrentProdRowId = iProductId,
                   @chProductNumber = chProductNumber,
                   @vchProductName = vchProductName
            FROM CustomerProduct
            WHERE iProductId = @iNextProdRowId
        END
    END

    -- Reset, then get the next customer id.
    SELECT @iNextCustRowId = NULL
    SELECT @iNextCustRowId = MIN(iCustId)
    FROM Customer
    WHERE iCustId > @iCurrentCustRowId

    IF ISNULL(@iNextCustRowId,0) = 0
    BEGIN
        BREAK
    END

    SELECT @iCurrentCustRowId = iCustId,
           @vchCustomerName = vchCustomerName
    FROM Customer
    WHERE iCustId = @iNextCustRowId
END
RETURN
go
In the above example we are looping through a customer table and, for each customer id, we are
then looping through a customer product table, retrieving all existing product records for that
customer. Notice that a different technique is used to exit from the inner loop. Instead of using a
BREAK statement, the WHILE loop depends directly on the value of @iNextProdRowId. When
it becomes NULL, having no value, the WHILE loop ends.
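The inner-loop exit technique can be sketched the same way in Python. The tables and the names customers, products and customer_products below are illustrative; the point is that the inner while tests the next-product-id value itself rather than using a break:

```python
# Python sketch of the nested (customer -> product) loop. The inner loop
# ends when next_prod is None, mirroring the WHILE condition on
# @iNextProdRowId instead of a BREAK statement.
customers = [{"iCustId": 1, "name": "Acme"}, {"iCustId": 2, "name": "Globex"}]
products = [
    {"iProductId": 10, "iCustId": 1, "prod": "Anvil"},
    {"iProductId": 11, "iCustId": 1, "prod": "Rocket"},
    {"iProductId": 12, "iCustId": 2, "prod": "Widget"},
]

def customer_products(customers, products):
    result = {}
    next_cust = min((c["iCustId"] for c in customers), default=None)
    while next_cust is not None:
        cust = next(c for c in customers if c["iCustId"] == next_cust)
        result[cust["name"]] = []
        # First product id for this customer, or None if there are none.
        next_prod = min((p["iProductId"] for p in products
                         if p["iCustId"] == next_cust), default=None)
        while next_prod is not None:  # inner loop: no break needed
            prod = next(p for p in products if p["iProductId"] == next_prod)
            result[cust["name"]].append(prod["prod"])
            next_prod = min((p["iProductId"] for p in products
                             if p["iCustId"] == next_cust
                             and p["iProductId"] > prod["iProductId"]),
                            default=None)
        next_cust = min((c["iCustId"] for c in customers
                         if c["iCustId"] > cust["iCustId"]), default=None)
    return result

print(customer_products(customers, products))
```

Note that neither the inner nor the outer loop opens, closes or deallocates anything per iteration, which is exactly the overhead the nested-cursor version pays for every outer row.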
Conclusion
SQL cursors are very useful and powerful because they offer a high degree of row-level data manipulation, but this power comes at a price: performance. In this article I have demonstrated an alternative that offers much of the cursor's flexibility, but without the negative impact on performance. I have used this alternative looping method several times in my professional career, cutting many hours of processing time on production SQL Servers.
(Remember to run SET NOEXEC ON last, because if you run it first the SET SHOWPLAN ON statement will, of course, not be run!)
1> SET SHOWPLAN ON
2> SET NOEXEC ON
3> GO
1> SELECT *
2> FROM users, post
3> WHERE users.userid = post.userid
4> GO
STEP 1
FROM TABLE
users
Nested iteration.
Table Scan.
Forward scan.
FROM TABLE
post
Nested iteration.
Table Scan.
Forward scan.
As you can see from the output we have a table scan on BOTH tables! YIKES! This will cause
some problems as your tables start to fill up with information.
1> CREATE INDEX userid ON users(userid)
2> GO
1> SELECT *
2> FROM post,
3> users
4> WHERE users.userid = post.userid
5> GO
STEP 1
FROM TABLE
post
Nested iteration.
Table Scan.
Forward scan.
FROM TABLE
users
Nested iteration.
Index : userid
Forward scan.
Positioning by key.
Keys are:
userid ASC
As you can see, the users table is now using the index you created. The reason post is still a table scan is that you are selecting all rows, so an index won't help at all. A more complex WHERE clause which uses more columns from post would require an index to avoid the table scan.
To turn off NOEXEC and SHOWPLAN, simply reverse the first commands:
1> SET NOEXEC OFF
2> SET SHOWPLAN OFF
3> GO
QUERY PLAN FOR STATEMENT 1 (at line 1).
STEP 1
STEP 1
1>
Code:
SET SHOWPLAN ON
SET FMTONLY ON
GO
EXEC sp_something
GO
To check what exactly is executed at the server level when a frontend user kicks off a report or any application module, use:
dbcc traceon(11201,11202,11203,11204,11205,11206)
This produces huge output in the errorlog. Make sure to turn it off (dbcc traceoff with the same trace flags) when the job is done.
1. Use disk init to create the new log device for your database.
4. Run dump tran with truncate_only to make sure we clear any log that might remain on the data device.
5. Use sp_helplog to make sure that the log starts on the log device.