Saturday, February 24, 2007

SQL CASE

CASE expressions bring a vast degree of power and control to SQL Server programmers. A working knowledge of CASE expressions can make accessing and updating SQL Server data easier and perhaps even more efficient. Additionally, CASE expressions enable more work to be accomplished in a single SQL statement, which should also improve efficiency and decrease development time. As such, CASE expressions should be a component of every SQL Server developer’s arsenal of programming techniques.

SQL Statement #1

select title, price, Budget = CASE
WHEN price > 20.00 THEN 'Expensive'
WHEN price BETWEEN 10.00 AND 19.99 THEN 'Moderate'
WHEN price < 10.00 THEN 'Inexpensive'
ELSE 'Unknown'
END
FROM titles


Result
title price budget
-------------------------------------------------------------------------------- --------- -----------
The Busy Executive's Database Guide 19.99 Moderate
Cooking with Computers: Surreptitious Balance Sheets 11.95 Moderate
You Can Combat Computer Stress! 2.99 Inexpensive
Straight Talk About Computers 19.99 Moderate
Silicon Valley Gastronomic Treats 19.99 Moderate
The Gourmet Microwave 2.99 Inexpensive
The Psychology of Computer Cooking NULL Unknown
But Is It User Friendly? 22.95 Expensive
Secrets of Silicon Valley 20.00 Unknown
Net Etiquette NULL Unknown
Computer Phobic AND Non-Phobic Individuals: Behavior Variations 21.59 Expensive
Is Anger the Enemy? 10.95 Moderate
Life Without Fear 7.00 Inexpensive
Prolonged Data Deprivation: Four Case Studies 19.99 Moderate
Emotional Security: A New Algorithm 7.99 Inexpensive
Onions, Leeks, and Garlic: Cooking Secrets of the Mediterranean 20.95 Expensive
Fifty Years in Buckingham Palace Kitchens 11.95 Moderate
Sushi, Anyone? 14.99 Moderate



SQL Statement #2


select top(10) au_lname, au_fname, isSocalMan=
case state when 'CA' then 'Yes, he is.'
else 'No, he is not' end
from authors


au_lname au_fname isSocalMan
---------------------------------------- -------------------- -------------
White Johnson Yes, he is.
Green Marjorie Yes, he is.
Carson Cheryl Yes, he is.
O'Leary Michael Yes, he is.
Straight Dean Yes, he is.
Smith Meander No, he is not
Bennet Abraham Yes, he is.
Dull Ann Yes, he is.
Gringlesby Burt Yes, he is.
Locksley Charlene Yes, he is.
(10 row(s) affected)


SQL Statement #3



use northwind
go
SELECT
EmployeeID,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 1 THEN Freight ELSE 0 END) AS jan,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 2 THEN Freight ELSE 0 END) AS feb,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 3 THEN Freight ELSE 0 END) AS mar,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 4 THEN Freight ELSE 0 END) AS apr,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 5 THEN Freight ELSE 0 END) AS may,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 6 THEN Freight ELSE 0 END) AS jun,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 7 THEN Freight ELSE 0 END) AS jul,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 8 THEN Freight ELSE 0 END) AS aug,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 9 THEN Freight ELSE 0 END) AS sep,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 10 THEN Freight ELSE 0 END) AS oct,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 11 THEN Freight ELSE 0 END) AS nov,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 12 THEN Freight ELSE 0 END) AS dec
FROM orders
GROUP BY EmployeeID
go


EmployeeID jan feb mar apr may jun jul aug sep oct nov dec
---------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- -------
1 969.88 371.06 856.37 1092.84 599.10 129.89 703.69 1226.63 352.01 720.55 704.70 1035.46
2 907.20 400.65 894.20 2368.82 376.87 545.15 298.73 182.74 430.22 1129.68 278.44 636.96
3 924.92 1644.45 1237.53 772.87 1290.63 1570.36 213.81 295.83 381.62 556.12 773.17 1223.43
4 703.92 2279.89 811.89 960.03 371.34 245.27 594.36 1332.21 818.09 1336.58 371.08 1161.78
5 708.31 621.09 201.94 8.80 237.93 127.34 314.58 10.76 283.91 160.98 328.19 914.88
6 328.57 75.03 372.72 785.08 107.47 260.54 41.59 394.19 261.35 385.30 321.27 373.61
7 1035.48 436.37 697.20 1416.70 427.74 89.16 108.68 713.51 585.20 528.22 492.69 33.80
8 838.61 435.37 1278.18 1221.37 619.74 55.72 81.69 828.82 190.84 1018.74 388.16 429.46
9 234.20 615.34 58.09 945.26 32.33 210.53 294.87 0.00 238.08 520.89 130.44 15.09


CASE has two formats:
The simple CASE function compares an expression to a set of simple expressions to determine the result.
The searched CASE function evaluates a set of Boolean expressions to determine the result.
Both formats support an optional ELSE argument.
Syntax
CASE input_expression
WHEN when_expression THEN result_expression
[ ...n ]
[
ELSE else_result_expression
]
END


CASE
WHEN Boolean_expression THEN result_expression
[ ...n ]
[
ELSE else_result_expression
]
END


Arguments
input_expression
The expression evaluated when you use the simple CASE format. The input_expression argument is any valid SQL Server expression.

WHEN when_expression
A simple expression to which input_expression is compared when you use the simple CASE format. The when_expression argument is any valid SQL Server expression. The data types of input_expression and each when_expression must be the same or must be implicitly converted.

n
A placeholder indicating that multiple WHEN when_expression THEN result_expression clauses, or multiple WHEN Boolean_expression THEN result_expression clauses can be used.

THEN result_expression
The expression returned when input_expression = when_expression evaluates to TRUE, or when Boolean_expression evaluates to TRUE. The result_expression argument is any valid SQL Server expression.

ELSE else_result_expression
The expression returned if no comparison operation evaluates to TRUE. If this argument is omitted and no comparison operation evaluates to TRUE, CASE returns NULL. The else_result_expression argument is any valid SQL Server expression. The data types of else_result_expression and any result_expression must be the same or must support implicit conversion.

WHEN Boolean_expression
The Boolean expression evaluated when you use the searched CASE format. Boolean_expression is any valid Boolean expression.

Result Types
Returns the highest precedence type from the set of types in result_expressions and the optional else_result_expression.
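
For example, here is a small sketch of what precedence means in practice: when one branch returns an int and another returns a decimal, the whole expression is typed as decimal, and the int branch is implicitly converted.

```sql
-- Both branches contribute to the result type; decimal has higher
-- precedence than int, so the expression is typed as decimal.
SELECT CASE WHEN 1 = 1 THEN 1 ELSE 2.5 END AS result
-- returns 1.0 (a decimal), not the int 1
```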

Return Value
Simple CASE function:

  • Evaluates input_expression, and then, in the order specified, evaluates input_expression = when_expression for each WHEN clause.
  • Returns the result_expression of the first (input_expression = when_expression) that evaluates to TRUE.
  • If no input_expression = when_expression evaluates to TRUE, else_result_expression is returned if an ELSE clause is specified, or NULL if no ELSE clause is specified.

Searched CASE function:

  • Evaluates, in the order specified, Boolean_expression for each WHEN clause.
  • Returns result_expression of the first Boolean_expression that evaluates to TRUE.
  • If no Boolean_expression evaluates to TRUE, else_result_expression is returned if an ELSE clause is specified, or NULL if no ELSE clause is specified.
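
One caveat worth noting: because the simple CASE format performs its comparisons with the = operator, a WHEN NULL clause never matches, since input_expression = NULL does not evaluate to TRUE. If you need to test for NULL, use the searched format with IS NULL. A minimal sketch against the titles table:

```sql
-- Simple CASE: the WHEN NULL branch is unreachable, so even a NULL
-- price falls through to the ELSE branch.
SELECT title, CASE price WHEN NULL THEN 'Unknown' ELSE 'Known' END FROM titles

-- Searched CASE: IS NULL tests for NULLs correctly.
SELECT title, CASE WHEN price IS NULL THEN 'Unknown' ELSE 'Known' END FROM titles
```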

Example
The following example returns the mode of shipping used for orders placed.
SELECT ShipVia, CASE ShipVia
WHEN 1 THEN 'A.Datum'
WHEN 2 THEN 'Contoso'
WHEN 3 THEN 'Consolidated Messenger'
ELSE 'Unknown'
END
FROM Orders


Using CASE Expressions When Modifying Data

UPDATE titles SET price = CASE
WHEN (price < 5.0 AND ytd_sales > 999.99) THEN price * 1.25
WHEN (price < 5.0 AND ytd_sales < 1000.00) THEN price * 1.15
WHEN (price > 4.99 AND ytd_sales > 999.99) THEN price * 1.2
ELSE price
END

The bottom line is that CASE expressions are very powerful, yet often neglected.
Favor using CASE expressions under the following circumstances:

  • when data needs to be converted from one type to another for display and no
    function exists to accomplish the task

  • when a summary row needs to be created from detail data

  • when conditional processing needs to be executed to determine results

  • when using UNION to “glue together” different subsets of a single table


One final usage guideline is to use the COALESCE function with your CASE expressions when you wish to avoid NULLs. Consider an employee table that contains three columns for wages: salary, commission, and retainer_fee. Any single employee can only receive one of these types of wages. So, two of the columns will contain NULLs, but one will always contain a value. The following statement uses the COALESCE function to return only the non-NULL value:


SELECT
COALESCE(salary, commission, retainer_fee, 0)

FROM
employee


The COALESCE function will go through the list of expressions (in this case columns and constants) and return the first non-NULL value encountered. In this case, the numeric constant 0 is added at the end of the list to provide a default value if all of the columns should
happen to be NULL.
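
It may help to know that COALESCE is defined as shorthand for a searched CASE expression, so the statement above behaves like the following sketch (same salary, commission, and retainer_fee columns):

```sql
SELECT
    CASE
        WHEN salary IS NOT NULL THEN salary
        WHEN commission IS NOT NULL THEN commission
        WHEN retainer_fee IS NOT NULL THEN retainer_fee
        ELSE 0
    END
FROM
    employee
```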

Wednesday, February 14, 2007

Not All SQL Server Indexes Are Created Equal



Not All SQL Server Indexes Are Created Equal


by Brad M. McGehee





If you have much experience with indexes at all, you are probably already familiar with the difference between clustered and non-clustered indexes. But this article is not about them. This article is about whether or not the SQL Server Query Optimizer will use your carefully crafted indexes. You may not be aware of this, but just because a column has an index doesn't mean the Query Optimizer will use it. As you can imagine, creating an index that will never be used is a waste of time, and in the worst cases, it can even reduce the performance of your application. Let's learn why.



To start out, let's look at a simple example. Let's assume we have an accounting database. In that database is a table called "orders". Among a number of different columns in this table, we are interested in two: "orderid" and "employeeid". This table has 150,000 rows and there is a non-clustered index on the "employeeid" column. Now let's say we want to run the following query:



SELECT orderid FROM orders WHERE employeeid = 5



The first thing to notice about the query is that the "employeeid" column used in the WHERE clause has a non-clustered index on it. Because of this, you would most likely assume that when this query is run through the Query Optimizer, the Query Optimizer would use the index to produce the requested results.



Unfortunately, you can't automatically make this assumption. Just because there is an available index doesn't necessarily mean that the Query Optimizer will use it. This is because the Query Optimizer always evaluates whether or not an index is useful before using it. If the Query Optimizer examines an index and finds it not useful, it will ignore it, and if necessary, it will perform a table scan to produce the requested results.



So what is a useful index? In order to answer this question, we need to understand that one of the biggest underlying goals of the Query Optimizer is to reduce the amount of I/O, and the corresponding amount of time, it takes to execute a query. In other words, the Query Optimizer evaluates many different ways a query can be executed, and picks the one it thinks will produce the least amount of I/O. But what may be surprising is that using an available index does not always result in the least amount of I/O. In many cases, especially with non-clustered indexes, a table scan can produce less I/O than an available index.



Before the Query Optimizer uses an index, it evaluates the index to see if it is selective enough. What does this mean? Selectivity refers to the percentage of rows in a table that are returned by a query. A query is considered highly selective if it returns a very limited number of rows, and to have low selectivity if it returns a high percentage of rows. Generally speaking, if a query returns
less than 5% of the number of rows in a table, it is considered to have high selectivity, and the index will most likely be used. If the query returns from 5% to 10% of the rows, the index may or may not be used. If the query returns more than 10% of the rows, the index most likely will not be used, and assuming there are no other useful indexes for the query, a table scan will be performed.



Let's return to our example query:



SELECT orderid FROM orders WHERE employeeid = 5



Just by looking at the query we don't know if the available index on employeeid will be used or not. Let's say we know that, of the 150,000 rows in the table, "employeeid = 5" is true for 5,000 of the records. If we divide 5,000 by 150,000 we get 3.3%. Since 3.3% is less than 5%, the Query Optimizer will most likely use the available index. But what if "employeeid = 5" is true for 25,000 rows instead? In this case, we divide 25,000 by 150,000 and get 16.6%. Since 16.6% is greater than 5%, or even 10%, the Query Optimizer most likely will not use the index and will instead perform a table scan.
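
You can measure this kind of ratio yourself. Here is a rough sketch against the example orders table (it simply counts matching rows; it is not how the Query Optimizer itself estimates selectivity):

```sql
-- Percentage of the table matched by the WHERE predicate.
SELECT 100.0 * SUM(CASE WHEN employeeid = 5 THEN 1 ELSE 0 END) / COUNT(*)
       AS pct_matched
FROM orders
```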



So how can a table scan use less I/O than using an index, such as the non-clustered index in our example? Non-clustered indexes are great if the index is highly selective, especially if you will be returning one record. But if many records will be returned, and the index is not very selective, it is very expensive in I/O to retrieve the data. The reason for this is that the Query Optimizer has to first go to the index to locate the data (using up I/O) and then it has to go to the table to retrieve it (more I/O). At some point, the Query Optimizer determines that it takes less I/O to scan an entire table than it does to go back and forth between the index and the table to retrieve the requested rows.



The example given above applies mostly to non-clustered indexes. If the available index is a clustered index, then the index may be used, even if there is low selectivity, because the index is the table and I/O operations can be very efficient.



So how does the Query Optimizer know if an available index is selective enough to be useful? It does this by maintaining index statistics on each index in every table. Index statistics are a histogram of values that are stored in the sysindexes table. These statistics are a sampling of the available rows that tells the Query Optimizer approximately how selective a particular index is.



Index statistics are created every time an index is created or rebuilt, when the UPDATE STATISTICS command is run, and automatically by the Query Optimizer as the need arises. Index statistics are not maintained in real time because that would put too much overhead on the server. But because index statistics are not real time, they can get out of date, and sometimes the Query Optimizer can make a poor choice because the statistics it uses are not current.



But just because statistics are current doesn't mean that the Query Optimizer will use an available index. Remember, the Query Optimizer bases its decision on the selectivity of an index, and it uses the index statistics to determine that selectivity.

So if the Query Optimizer can check whether a particular index is useful or not, how can we do the same thing? Fortunately, there is a command that lets us examine an index and find out if it is selective enough to be used.



The reason we want to know whether an index is selective enough is that if it isn't, it won't be used. And if an index won't be used, there is no point in having it. Most likely, dropping unnecessary indexes can boost the performance of your application because indexes, as you probably know, slow down INSERTs, UPDATEs, and DELETEs on a table because of the overhead of maintaining them. And if the table in question is subject to a high level of change, index maintenance can be the cause of some bottlenecks. So our goal is to ensure that the indexes we do have are selective enough to be useful. We don't want to maintain indexes that won't be used.



The command we will use to find the selectivity of an index is:



DBCC SHOW_STATISTICS (table_name, index_name)




When this command runs, it produces an output similar to the following. This is a real result based on one of the databases I maintain.



Statistics for INDEX 'in_tran_idx'.



Updated              Rows     Rows Sampled  Steps  Density       Average key length

-------------------- -------- ------------- ------ ------------- ------------------

Feb 24 2001  3:36AM  7380901  7163688       300    2.2528611E-5  0.0



(1 row(s) affected)



All density Columns

------------------------

2.2528611E-5 in_tran_key



Steps

-----------

1

283

301

340

371

403

456

...
44510



(300 row(s) affected)




DBCC execution completed. If DBCC printed error messages, contact your system administrator.



This result includes a lot of information, most of which is beyond the scope of this article. What we want to focus on is the density value "2.2528611E-5" under the "All density" column heading.



Density refers to the average percentage of duplicate rows in an index. If an indexed column, such as employeeid, has much duplicate data, then the index is said to have high density. But if an indexed column has mostly unique data, then the index is said to have low density.



Density is inversely related to selectivity. If density is a high number, then selectivity is low, which means an index may not be used. If density is a low number, then selectivity is high, and an index most likely will be used.
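
For a single-column index you can approximate this figure yourself, since density is roughly 1 divided by the number of distinct key values. This is a sketch only; the statistics SQL Server actually stores are based on a sampling of the rows:

```sql
-- Rough stand-in for the "All density" value of an index on employeeid.
SELECT 1.0 / COUNT(DISTINCT employeeid) AS approx_density
FROM orders
```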



In the sample printout above, the density for the index is much less than 1%. In turn, this means the selectivity of the index is over 99%, which means that the index is probably very useful to the Query Optimizer.



If you are an advanced DBA, you will probably have already noticed that I have oversimplified this discussion. Even so, the point I want to make in this article is still very valid: not all indexes are equal. Just because an index is available to the Query Optimizer does not mean it will always be used.



For DBAs, this means you need to be wary of your tables' indexes. As time permits, you may want to run the DBCC SHOW_STATISTICS command and see how selective your indexes actually are. You may find that some of your indexes are not being used. And if this is the case, you may want to consider removing them, which in turn may speed up your application.



For new DBAs, removing rather than adding indexes may seem a backward way to performance tune your database. But the more you learn about how SQL Server works internally, the better you will understand the limits of using indexes to performance tune your applications.



To read about this subject in much greater depth, see chapter 15, "The Query Processor," in the book Inside Microsoft SQL Server 2000 by Kalen Delaney.

Sunday, February 4, 2007

MSSQL Cursors


General Concepts

Declaring a Cursor

  • SQL-92 Syntax

  • Transact-SQL Extended Syntax

Opening a Cursor

Fetching a Cursor

Closing a Cursor

Deallocating a Cursor

Cursor Optimization Tips

Literature




    General Concepts


    In this article, I want to tell you how to create and use server-side
    cursors and how you can optimize cursor performance.

    A cursor is a database object used by applications to manipulate data
    in a set on a row-by-row basis, instead of with the typical SQL commands
    that operate on all the rows in the set at one time. For example,
    you can use a cursor to loop through a list of all user databases and
    perform multiple operations against each one by passing each database
    name as a variable.

    Server-side cursors were first added in the SQL Server 6.0 release and
    are now supported in all editions of SQL Server 7.0 and SQL Server 2000.

    Before using a cursor, you must first declare it. Once a cursor
    has been declared, you can open it and fetch from it. You can fetch
    row by row and perform multiple operations on the currently active row
    in the cursor. When you have finished working with a cursor, you
    should close it and deallocate it to release SQL Server resources.

    Declaring a Cursor


    Before using a cursor, you must first declare it, i.e. define
    its scrolling behavior and the query used to build the result set on
    which the cursor operates. To declare a cursor, you can use either a syntax
    based on the SQL-92 standard or a syntax using a set of Transact-SQL extensions.

    SQL-92 Syntax


    This is SQL-92 Syntax:


    DECLARE cursor_name [INSENSITIVE] [SCROLL] CURSOR
    FOR select_statement
    [FOR {READ ONLY | UPDATE [OF column_name [,...n]]}]


    where

    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.

    INSENSITIVE - specifies that the cursor will use a temporary copy of the data
    instead of the base tables. This cursor does not allow modifications,
    and modifications made to the base tables are not reflected in the
    data returned by fetches made to this cursor.

    SCROLL - specifies that the cursor can fetch data in all directions, not only
    sequentially until the end of the result set. If this argument is
    not specified, FETCH NEXT is the only fetch option supported.

    select_statement - the standard select statement; cannot contain
    COMPUTE, COMPUTE BY, FOR BROWSE, and INTO keywords.

    READ ONLY - specifies that the cursor cannot be updated.

    UPDATE [OF column_name [,...n]] - specifies that all of the cursor's columns can be
    updated (if OF column_name [,...n] is not
    specified), or that only the columns listed in the
    OF column_name [,...n] list allow modifications.

    Cursor Options Compatibility


                  INSENSITIVE   SCROLL   READ ONLY   UPDATE
    INSENSITIVE   -             Yes      Yes         No
    SCROLL        Yes           -        Yes         Yes
    READ ONLY     Yes           Yes      -           No
    UPDATE        No            Yes      No          -


    Transact-SQL Extended Syntax


    This is Transact-SQL Extended Syntax:




    DECLARE cursor_name CURSOR
    [LOCAL | GLOBAL]
    [FORWARD_ONLY | SCROLL]
    [STATIC | KEYSET | DYNAMIC | FAST_FORWARD]
    [READ_ONLY | SCROLL_LOCKS | OPTIMISTIC]
    [TYPE_WARNING]
    FOR select_statement
    [FOR UPDATE [OF column_name [,...n]]]


    where



    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.

    LOCAL - specifies that the cursor is available only in the batch,
    stored procedure, or trigger in which the cursor was created.
    The LOCAL cursor will be implicitly deallocated when the batch,
    stored procedure, or trigger terminates.

    GLOBAL - specifies that the cursor is global to the connection. The GLOBAL
    cursor will be implicitly deallocated at disconnect.

    FORWARD_ONLY - specifies that the cursor can only fetch data sequentially
    from the first to the last row. FETCH NEXT is the only
    fetch option supported.

    STATIC - specifies that the cursor will use a temporary copy of the data
    instead of the base tables. This cursor does not allow modifications,
    and modifications made to the base tables are not reflected in the
    data returned by fetches made to this cursor.

    KEYSET - specifies that the cursor uses the set of keys that uniquely
    identify the cursor's rows (the keyset), so that the membership and
    order of rows in the cursor are fixed when the cursor is opened.
    SQL Server uses a table in tempdb to store the keyset. The KEYSET
    cursor allows updates of nonkey values to be made through
    this cursor, but inserts made by other users are not visible.
    Updates of nonkey values made by other users are visible as the
    owner scrolls around the cursor, but updates of key values made
    by other users are not visible. If a row is deleted, an attempt
    to fetch the row returns an @@FETCH_STATUS of -2.

    DYNAMIC - specifies that the cursor reflects all data changes made to the
    base tables as you scroll around the cursor. The FETCH ABSOLUTE
    option is not supported with a DYNAMIC cursor.

    FAST_FORWARD - specifies that the cursor will be a FORWARD_ONLY, READ_ONLY
    cursor. FAST_FORWARD cursors produce the least amount
    of overhead on SQL Server.

    READ_ONLY - specifies that the cursor cannot be updated.

    SCROLL_LOCKS - specifies that the cursor will lock the rows as they are
    read into the cursor to ensure that positioned updates
    or deletes made through the cursor will succeed.

    OPTIMISTIC - specifies that the cursor does not lock rows as they are
    read into the cursor. So, positioned updates or
    deletes made through the cursor will not succeed if
    the row has been updated outside the cursor since
    this row was read into the cursor.

    TYPE_WARNING - specifies that if the cursor will be implicitly
    converted from the requested type to another,
    a warning message will be sent to the client.

    select_statement - the standard select statement; cannot contain
    COMPUTE, COMPUTE BY, FOR BROWSE, and INTO keywords.

    UPDATE [OF column_name [,...n]] - specifies that all of the cursor's columns can be
    updated (if OF column_name [,...n] is not
    specified), or that only the columns listed in the
    OF column_name [,...n] list allow modifications.

    Opening a Cursor


    Once a cursor has been declared, you must open it to fetch data from it.
    To open a cursor, you can use the following syntax:


    OPEN { { [GLOBAL] cursor_name } | @cursor_variable_name }


    where

    GLOBAL - if this argument was not specified and both a global and
    a local cursor exist with the same name, the local cursor
    will be opened; otherwise, the global cursor will be opened.

    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.

    cursor_variable_name - the name of a cursor variable that
    references a cursor.

    After a cursor is opened, you can determine the number of rows
    that were found by the cursor. To get this number, you can use the
    @@CURSOR_ROWS function.

    Fetching a Cursor


    Once a cursor has been opened, you can fetch from it row by row and perform
    multiple operations on the currently active row in the cursor.
    To fetch from a cursor, you can use the following syntax:


    FETCH
    [ [ NEXT | PRIOR | FIRST | LAST
    | ABSOLUTE {n | @nvar}
    | RELATIVE {n | @nvar}
    ]
    FROM
    ]
    { { [GLOBAL] cursor_name } | @cursor_variable_name }
    [INTO @variable_name[,...n] ]


    where

    NEXT - the default cursor fetch option. FETCH NEXT returns the
    next row after the current row.

    PRIOR - returns the prior row before the current row.

    FIRST - returns the first row in the cursor.

    LAST - returns the last row in the cursor.

    ABSOLUTE {n | @nvar} - returns the nth row in the cursor. If a positive
    number was specified, the rows are counted from the
    top of the data set; if 0 was specified, no rows are
    returned; if a negative number was specified, the
    number of rows will be counted from the bottom of
    the data set.

    RELATIVE {n | @nvar} - returns the nth row in the cursor relative to
    the current row. If a positive number was specified,
    returns the nth row beyond the current row; if a
    negative number was specified, returns the nth row
    prior to the current row; if 0 was specified, returns
    the current row.

    GLOBAL - if this argument was not specified and both a global and
    a local cursor exist with the same name, the local cursor
    will be fetched; otherwise, the global cursor will be fetched.

    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.

    cursor_variable_name - the name of a cursor variable that
    references a cursor.

    INTO @variable_name[,...n] - allows data returned from the cursor
    to be held in temporary variables. The type
    of variables must match the type of columns
    in the cursor select list or support implicit
    conversion. The number of variables must match
    the number of columns in the cursor select list.

    Closing a Cursor


    When you have finished working with a cursor, you can close it to
    release any resources and locks that SQL Server may have used while
    the cursor was open.

    To close a cursor, you can use the following syntax:


    CLOSE { { [GLOBAL] cursor_name } | @cursor_variable_name }


    where

    GLOBAL - if this argument was not specified and both a global and
    a local cursor exist with the same name, the local cursor
    will be closed; otherwise, the global cursor will be closed.

    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.


    cursor_variable_name - the name of a cursor variable that
    references a cursor.

    Note. If you have closed a cursor, but have not deallocated it,
    you can open it again when needed.

    Deallocating a Cursor


    When you have finished working with a cursor and want to completely
    release SQL Server resources that were used by a cursor, you can
    deallocate a cursor.

    To deallocate a cursor, you can use the following syntax:


    DEALLOCATE { { [GLOBAL] cursor_name } | @cursor_variable_name }


    where

    GLOBAL - if this argument was not specified and both a global and
    a local cursor exist with the same name, the local cursor will be
    deallocated; otherwise, the global cursor will be deallocated.

    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.

    cursor_variable_name - the name of a cursor variable that
    references a cursor.

    Note. Deallocating a cursor completely removes all cursor references.
    So, after a cursor is deallocated, it can no longer be opened.
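
    Putting the whole lifecycle together, here is a minimal sketch of a
    FAST_FORWARD cursor loop (the authors table from the pubs sample
    database is assumed; any small table works):

    ```sql
    DECLARE @lname varchar(40), @fname varchar(20)

    -- Declare a low-overhead, read-only, forward-only server-side cursor.
    DECLARE author_cursor CURSOR LOCAL FAST_FORWARD
    FOR SELECT au_lname, au_fname FROM authors

    OPEN author_cursor

    -- Prime the loop, then keep fetching while @@FETCH_STATUS reports success.
    FETCH NEXT FROM author_cursor INTO @lname, @fname
    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT @lname + ', ' + @fname   -- row-by-row work goes here
        FETCH NEXT FROM author_cursor INTO @lname, @fname
    END

    -- Close to release locks and the result set, then deallocate to free
    -- the cursor's data structures.
    CLOSE author_cursor
    DEALLOCATE author_cursor
    ```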

    Cursor Optimization Tips



    • Try to avoid using SQL Server cursors, whenever possible.

      SQL Server cursors can result in some performance degradation in comparison
      with select statements. Try to use a correlated subquery or derived tables if
      you need to perform row-by-row operations.


    • Do not forget to close SQL Server cursor when its result set is not needed.

      To close SQL Server cursor, you can use CLOSE {cursor_name} command. This command
      releases the cursor result set and frees any cursor locks held on the rows on
      which the cursor is positioned.


    • Do not forget to deallocate SQL Server cursor when the data structures
      comprising the cursor are not needed.


      To deallocate SQL Server cursor, you can use DEALLOCATE {cursor_name} command.
      This command removes a cursor reference and releases the data structures comprising
      the cursor.


    • Try to reduce the number of records to process in the cursor.

      To reduce the cursor result set, use the WHERE clause in the cursor's
      select statement. It can increase cursor performance and reduce SQL Server
      overhead.


    • Try to reduce the number of columns to process in the cursor.

      Include in the cursor's select statement only necessary columns. It will
      reduce the cursor result set. So, the cursor will use fewer resources.
      It can increase cursor performance and reduce SQL Server overhead.


    • Use READ ONLY cursors, whenever possible, instead of updatable cursors.

      Because using cursors can reduce concurrency and lead to unnecessary locking,
      try to use READ ONLY cursors, if you do not need to update cursor result set.


    • Try to avoid using insensitive, static, and keyset cursors, whenever possible.

      These types of cursors produce the largest amount of overhead on SQL Server,
      because they cause a temporary table to be created in tempdb, which results
      in some performance degradation.


    • Use FAST_FORWARD cursors, whenever possible.

      FAST_FORWARD cursors produce the least amount of overhead on SQL Server,
      because they are read-only cursors and can only be scrolled from the first
      to the last row. Use a FAST_FORWARD cursor if you do not need to update the
      cursor result set and FETCH NEXT will be the only fetch option used.


    • Use FORWARD_ONLY cursors, if you need an updatable cursor and FETCH NEXT
      will be the only fetch option used.


      If you need a read-only cursor and FETCH NEXT will be the only fetch
      option used, try to use a FAST_FORWARD cursor instead of a FORWARD_ONLY cursor.
      Note that FAST_FORWARD and FORWARD_ONLY are mutually exclusive: if one is
      specified, the other cannot be.


    Literature


    1. DECLARE CURSOR

    2. OPEN

    3. FETCH

    4. http://msdn.microsoft.com/library/en-us/tsqlref/ts_globals_1c1f.asp

    5. CLOSE

    6. DEALLOCATE