Tuesday, April 24, 2007

How to enable T-SQL debugging in Visual Studio .NET 2005

1. Right-click the user (login) that you want to enable for debugging and select Properties.

2. Select Server Roles and check sysadmin role.


3. Alternatively, you may run the following query:
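The screenshot for this step is missing here; the query referred to is presumably the role-membership call shown in the error section further down, which adds the login to the sysadmin server role (the login name below is a placeholder, substitute your own):

```sql
-- Add a Windows login to the sysadmin server role so it can debug.
EXEC sp_addsrvrolemember 'Domain\domainusername', 'sysadmin'
```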


4. Open Visual Studio .NET 2005 and open Server Explorer.
Then right-click Data Connections and select Add Connection…



5. Then set up the database connection.



6. Now expand your database in Visual Studio .NET 2005, right-click the stored procedure you want to debug, and choose Step Into.




7. Now provide the input parameters for the stored procedure.




8. Then finally, you are able to debug T-SQL in Visual Studio .NET 2005.
Congratulations!

Error: User Could Not Execute Stored Procedure sp_enable_sql_debug

I found a hint here: http://msdn2.microsoft.com/en-us/library/ms241735(VS.80).aspx

The Stored Procedure sp_enable_sql_debug could not execute on the server. This can be caused by:
A connection problem. You need to have a stable connection to the server.
Lack of necessary permissions on the server. To debug on SQL Server 2005, both the account running Visual Studio and the account used to connect to SQL Server must be members of the sysadmin role. The account used to connect to SQL Server is either your Windows user account (if you are using Windows authentication) or an account with user ID and password (if you use SQL authentication).


EXEC sp_addsrvrolemember 'Domain\domainusername', 'sysadmin'

Overview of T-SQL and CLR debugging in SQL Server 2005

From:http://blogs.msdn.com/sqlclr/archive/2006/06/29/651644.aspx

SQL Server 2005 ships with a new debugging engine that supports debugging of any T-SQL and CLR code running in the server, including batches, stored procedures, user defined functions/aggregates/triggers, etc. You can use Visual Studio 2005 to debug against SQL Server 2005 or SQL Server 2000, but you can not use Visual Studio 2003 or earlier to debug against SQL Server 2005 because the debugging engine is not compatible.

There are some improvements in debugging in SQL Server 2005:
. It's much easier to set up than debugging in SQL Server 2000. You can enable debugging of SQL Server 2005 from Visual Studio 2005 by following these simple steps (please note that the Remote Debugging Monitor mentioned in the steps is not required for T-SQL debugging).
. Integration of T-SQL debugging and CLR debugging. You can step in/out from T-SQL code to CLR code, or vice versa. You will get a mixed call stack consisting of T-SQL frames and CLR frames, and can inspect T-SQL and CLR variables on any frame.
. Full functionality of CLR debugging.
. Isolation of database connections in T-SQL debugging. When you break in T-SQL code in one connection, other connections are not suspended and can continue to run normally until they require a resource locked by the connection being debugged. In other words, the impact of debugging a connection on other connections to the server is similar to having a long-running connection.

The easiest way to develop and debug CLR code running in SQL Server is to use a C# or VB SQL Server project in Visual Studio. When you deploy a SQL Server project, Visual Studio deploys the CLR assembly and its symbol file (.pdb) and source code files to the database so that the assembly is ready for execution and debugging.

When you have a CLR assembly built in other ways, you can also use CREATE ASSEMBLY statement to deploy it to the database. In this case you need to use ALTER ASSEMBLY ADD FILE to attach the symbol file (.pdb) and source code files to the assembly, and then you can use Visual Studio to debug the assembly. If you forget to add a symbol file or source file, step-in or breakpoint in the corresponding CLR code will be skipped by Visual Studio.
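The deployment path described above might look like the following sketch; the assembly name, file names, and paths here are hypothetical, not from the original article:

```sql
-- Deploy an externally built CLR assembly, then attach its symbol file and
-- source code so Visual Studio can set breakpoints and step into it.
CREATE ASSEMBLY MyClrLibrary
FROM 'C:\build\MyClrLibrary.dll'
WITH PERMISSION_SET = SAFE

ALTER ASSEMBLY MyClrLibrary
ADD FILE FROM 'C:\build\MyClrLibrary.pdb'

ALTER ASSEMBLY MyClrLibrary
ADD FILE FROM 'C:\build\MyClrSource.cs'
```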

In Visual Studio there are 3 ways that you can start debugging of T-SQL or CLR code in the database:
. Direct Database Debugging (DDD). You can open Server Explorer and add a Data Connection to a SQL Server database. Then you can browse to any T-SQL or CLR object in the database, such as a stored procedure or function, right click on it, and select Step Into to begin DDD.
. Debugging from a SQL Server project. When you start debugging from a C# or VB SQL Server project (e.g. by pressing F5), the assembly will be built and deployed, and the default test script will be run in debugging mode. Breakpoints in the default test script or in any T-SQL or CLR code called by the default test script will be hit. You can also right click on any test script in the project and select Debug Script.
. Application debugging. You can use Visual Studio to attach to any client application that opens a database connection to SQL Server, e.g. SQL Management Studio, and then you can debug T-SQL or CLR code executed on this connection. When you attach to the client process, you need to make sure the debugging type includes T-SQL code, along with any other desired types such as native code and/or managed code. There are some notable limitations in application debugging:
. You have to attach to the client process before the database connection is opened. Any code run on the connections opened before attaching will be ignored.
. A Data Connection to the server being debugged must be present in Server Explorer. Connections made by the attached client to databases not listed in Data Connections in Server Explorer will not be debugged.
. You cannot step into a T-SQL or CLR stored procedure from client code (managed or native). Usually you need to set a breakpoint in the T-SQL or CLR code that you want to debug. To do this, browse to the desired object in Server Explorer, double-click the object to open its source code, and set a breakpoint.

The following are some limitations of T-SQL and CLR debugging in SQL Server 2005:
. T-SQL debugging is at the statement level. You cannot step into the execution of a SELECT statement.
. Visual Studio 2005 is needed to debug T-SQL. Microsoft SQL Server Management Studio doesn't support debugging. Some third-party tools that can be used to debug SQL Server 2005 may be released in the near future (or may have already been released).
. Breaking into CLR code in one connection freezes all connections that are running CLR code. CLR debugging doesn't have the connection isolation that is available in T-SQL debugging, because of limitations in the CLR debugging architecture.
. Debugging doesn't work with parallel compiled plan execution. When a batch or stored procedure being debugged is compiled to a parallel plan, debugging may not work. In this case you need to disable parallel plan generation to debug it.
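For the last point, one way to suppress a parallel plan for a single statement while debugging (a sketch of my own, not from the quoted article) is the MAXDOP query hint:

```sql
-- Force a serial plan for this statement so the T-SQL debugger can step it.
SELECT orderid, employeeid
FROM orders
OPTION (MAXDOP 1)
```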

Debugging stored procedures in Query Analyzer of MSSQL2000

Bottom line is that the DCOM configuration must include the tcp/ip protocol.

The debugger does not use the same connection as SQL Query Analyzer, which means that there is a second connection for the debugger. It uses DCOM to make a connection to the server.

If you can't debug, run dcomcnfg, select the protocols tab, and make sure tcp/ip is in the list.

Wednesday, March 21, 2007

Splitting an integer list in TSQL

from:http://snippets.dzone.com/posts/show/774

IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = object_id(N'[dbo].[uspSplitIntegerList]') AND OBJECTPROPERTY(id, N'IsProcedure') = 1)
DROP PROCEDURE [dbo].[uspSplitIntegerList]
GO

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO


/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */
-- uspSplitIntegerList
--
-- Description:
-- splits a comma separated list of integers and returns the integer list
--
-- Arguments:
-- @list_integers - list of integers
--
-- Notes:
-- 02/22/2006 - WSR : use DATALENGTH instead of LEN throughout because LEN doesn't count trailing blanks
--
-- History:
-- 02/22/2006 - WSR : revised algorithm to account for items crossing 8000 character boundary
--
CREATE PROCEDURE uspSplitIntegerList
@list_integers text
AS

SET NOCOUNT ON

DECLARE @InputLen integer -- input text length
DECLARE @TextPos integer -- current position within input text
DECLARE @Chunk varchar(8000) -- chunk within input text
DECLARE @ChunkPos integer -- current position within chunk
DECLARE @DelimPos integer -- position of delimiter
DECLARE @ChunkLen integer -- chunk length
DECLARE @DelimLen integer -- delimiter length
DECLARE @Delimiter varchar(3) -- delimiter
DECLARE @ItemBegPos integer -- item starting position in text
DECLARE @ItemOrder integer -- item order in list

-- create table to hold list items
-- actually their positions because we may want to scrub this list eliminating bad entries before substring is applied
CREATE TABLE #list_items ( item_order integer, item_begpos integer, item_endpos integer )

-- process list
IF @list_integers IS NOT NULL
BEGIN

-- initialize
-- notice that this loop assumes a delimiter length of 1
-- if the delimiter is longer we have to deal with stuff like delimiters straddling the chunk boundaries
SET @InputLen = DATALENGTH(@list_integers)
SET @TextPos = 1
SET @Delimiter = ','
SET @DelimLen = DATALENGTH(@Delimiter)
SET @ItemBegPos = 1
SET @ItemOrder = 1
SET @ChunkLen = 1

-- cycle through input processing chunks
WHILE @TextPos <= @InputLen AND @ChunkLen <> 0
BEGIN

-- get current chunk
SET @Chunk = SUBSTRING(@list_integers, @TextPos, 8000)

-- setup initial variable values
SET @ChunkPos = 1
SET @ChunkLen = DATALENGTH(@Chunk)
SET @DelimPos = CHARINDEX(@Delimiter, @Chunk, @ChunkPos)

-- loop over the chunk, until the last delimiter
WHILE @ChunkPos <= @ChunkLen AND @DelimPos <> 0
BEGIN

-- insert position
INSERT INTO #list_items (item_order, item_begpos, item_endpos)
VALUES (@ItemOrder, @ItemBegPos, (@TextPos + @DelimPos - 1) - 1)

-- adjust positions
SET @ItemOrder = @ItemOrder + 1
SET @ItemBegPos = (@TextPos + @DelimPos - 1) + @DelimLen
SET @ChunkPos = @DelimPos + @DelimLen

-- find next delimiter
SET @DelimPos = CHARINDEX(@Delimiter, @Chunk, @ChunkPos)

END

-- adjust positions
SET @TextPos = @TextPos + @ChunkLen

END

-- handle last item
IF @ItemBegPos <= @InputLen
BEGIN

-- insert position
INSERT INTO #list_items (item_order, item_begpos, item_endpos)
VALUES (@ItemOrder, @ItemBegPos, @InputLen)

END

-- delete the bad items
DELETE FROM #list_items
WHERE item_endpos < item_begpos

-- return list items
SELECT CAST(SUBSTRING(@list_integers, item_begpos, (item_endpos - item_begpos + 1)) AS integer) AS item_integer, item_order, item_begpos, item_endpos
FROM #list_items
WHERE ISNUMERIC(SUBSTRING(@list_integers, item_begpos, (item_endpos - item_begpos + 1))) = 1
ORDER BY item_order

END

DROP TABLE #list_items

RETURN

/* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ */

GO
SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO
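A quick smoke test of the procedure (the input list here is just an illustration):

```sql
-- Returns one row per integer (10, 20, 30), each with its order and
-- character positions in the original list.
EXEC uspSplitIntegerList '10,20,30'
```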

Split Delimiter Separated Lists

From:http://www.novicksoftware.com/UDFofWeek/Vol1/T-SQL-UDF-Volume-1-Number-29-udf_Txt_SplitTAB.htm

Creating and parsing lists of items in a string is a frequent
request on SQL Server discussion groups. Storing multiple items
in a list runs counter to relational concepts such as
normalization. However, it's a compact format that most people
find easy to read and that many other programs use for data
exchange.

Here's an example of what I'm talking about. It's a comma-
separated list of the Three Stooges:

'Moe, Larry, Shemp, Curly, Joe, Curly-Joe'

Why 6 stooges? Shemp didn't really like getting poked in the
eye and retired. He was replaced by Curly, Joe, and then
Curly-Joe. They kept on making Stooges shorts into the 1960s.

Once you have a list in a string, it's often necessary to split
it into the individual items. That's the purpose of this
week's UDF. Here's the CREATE FUNCTION script:
/---------- Start copying below this line ----------------------\
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
GO

CREATE FUNCTION dbo.udf_Txt_SplitTAB (

@sInputList varchar(8000) -- List of delimited items
, @Delimiter char(1) = ',' -- delimiter that separates items
) RETURNS @List TABLE (Item varchar(8000))
WITH SCHEMABINDING
/*
* Returns a table of strings that have been split by a delimiter.
* Similar to the Visual Basic (or VBA) SPLIT function. The
* strings are trimmed before being returned. Null items are not
* returned so if there are multiple separators between items,
* only the non-null items are returned.
* Space is not a valid delimiter.
*
* Example:
select * FROM dbo.udf_Txt_SplitTAB('abcd,123, 456, efh,,hi', ',')
*
* Test:
DECLARE @Count int, @Delim char(10), @Input varchar(128)
SELECT @Count = Count(*)
FROM dbo.udf_Txt_SplitTAB('abcd,123, 456', ',')
PRINT 'TEST 1 3 lines:' + CASE WHEN @Count=3
THEN 'Worked' ELSE 'ERROR' END
SELECT @DELIM=CHAR(10)
, @INPUT = 'Line 1' + @delim + 'line 2' + @Delim
SELECT @Count = Count(*)
FROM dbo.udf_Txt_SplitTAB(@Input, @Delim)
PRINT 'TEST 2 LF :' + CASE WHEN @Count=2
THEN 'Worked' ELSE 'ERROR' END
*
* © Copyright 2003 Andrew Novick http://www.NovickSoftware.com
* You may use this function in any of your SQL Server databases
* including databases that you sell, so long as they contain
* other unrelated database objects. You may not publish this
* UDF either in print or electronically.
* Published in T-SQL UDF of the Week Newsletter Vol 1 #29
http://www.NovickSoftware.com/UDFofWeek/UDFofWeek.htm
***************************************************************/
AS BEGIN

DECLARE @Item Varchar(8000)
DECLARE @Pos int -- Current Starting Position
, @NextPos int -- position of next delimiter
, @LenInput int -- length of input
, @LenNext int -- length of next item
, @DelimLen int -- length of the delimiter

SELECT @Pos = 1
, @DelimLen = LEN(@Delimiter) -- usually 1
, @LenInput = LEN(@sInputList)
, @NextPos = CharIndex(@Delimiter, @sInputList, 1)

-- Doesn't work for space as a delimiter
IF @Delimiter = ' ' BEGIN
INSERT INTO @List
SELECT 'ERROR: Blank is not a valid delimiter'
RETURN
END


-- loop over the input, until the last delimiter.
While @Pos <= @LenInput and @NextPos > 0 BEGIN

IF @NextPos > @Pos BEGIN -- another delimiter found
SET @LenNext = @NextPos - @Pos
Set @Item = LTrim(RTrim(
substring(@sInputList
, @Pos
, @LenNext)
)
)
IF LEN(@Item) > 0
Insert Into @List Select @Item
-- ENDIF

END -- IF

-- Position over the next item
SELECT @Pos = @NextPos + @DelimLen
, @NextPos = CharIndex(@Delimiter
, @sInputList
, @Pos)
END

-- Now there might be one more item left
SET @Item = LTrim(RTrim(
SUBSTRING(@sInputList
, @Pos
, @LenInput-@Pos + 1)
)
)

IF Len(@Item) > 0 -- Put the last item in, if found
INSERT INTO @List SELECT @Item

RETURN
END
GO

GRANT SELECT ON [dbo].[udf_Txt_SplitTAB] to PUBLIC
GO
\---------- Stop copying above this line -----------------------/

Now, let's break up the Stooges:
/---------- Start copying below this line ----------------------\
SELECT Item as [Stooge]
FROM udf_Txt_SplitTAB(
'Moe, Larry, Shemp, Curly, Joe, Curly-Joe'
, ',')
\---------- Stop copying above this line -----------------------/
(Results)
Stooge
----------------------------------------
Moe
Larry
Shemp
Curly
Joe
Curly-Joe

Of course, you can use other separators if you like. However,
udf_Txt_SplitTAB can't use space as a delimiter. Spaces get
special treatment, they're stripped from the ends of the items.
This next statement illustrates:

/---------- Start copying below this line ----------------------\
SELECT '->' + Item + '<-' as [Item]
FROM udf_Txt_SplitTAB(
'1, 22 , 333, 4444 , , 55555'
, ',')
\---------- Stop copying above this line -----------------------/
(Results)
Item
----------------------------------------
->1<-
->22<-
->333<-
->4444<-
->55555<-

Next issue I'll show you how to pull off the reverse trick and
combine multiple strings into a comma-separated list.

INFORMATION_SCHEMA: A Map to Your Database

From:http://www.devx.com/getHelpOn/10MinuteSolution/20561

If you've spent any time at all working with databases, you know that the time comes when you scratch your head and ask, "Now what was the name of that table (or some other object)?" Programming SQL Server effectively has always involved knowing how to find the metadata—that is, the information about the structure of your database. In previous versions of SQL Server, you could use Enterprise Manager, use some of the catalog stored procedures (such as sp_help), or query the system tables directly. (See my previous 10-Minute Solution, "Using Metadata to Solve Complex Problems," for an example of using the sysobjects table.)

Now in version 7, SQL Server provides another method for obtaining metadata: the INFORMATION_SCHEMA views. In this article, I introduce these views and show you how they can provide critical information about your database.

What Are the INFORMATION_SCHEMA Views?

The INFORMATION_SCHEMA is part of the SQL-92 standard, so you will be able to use your knowledge of these views in other database engines in the future. In SQL Server, these views exist in each database. However, they are not under the "dbo" user. Instead they are under the ownership of INFORMATION_SCHEMA. Therefore, to access any of the views, you need to preface the view name with "INFORMATION_SCHEMA"—that is, select * from INFORMATION_SCHEMA.TABLES. Each of the views in this collection exposes a different aspect of the metadata of the database. In all, there are 17 views.

Translation, Please

When looking at these views, you will find fields with names that don't readily translate into terms that we currently use in SQL Server. Fortunately, the Books Online has created a translation table for us (see Table 1).

Table 1. Equivalent SQL Server and SQL-92 Names

SQL Server Name              SQL-92 Name
database                     Catalog
owner                        Schema
object                       Object
user-defined data type       Domain


The TABLES View

Let's take a closer look at the TABLES view. Table 2 shows the structure of this view.

Table 2. Structure of TABLES View

Column name        Data type        Description
TABLE_CATALOG      nvarchar(128)    Table qualifier
TABLE_SCHEMA       nvarchar(128)    Table owner
TABLE_NAME         sysname          Table name
TABLE_TYPE         varchar(10)      Type of table; can be VIEW or BASE TABLE


Try selecting from this view. The TABLE_CATALOG contains the name of the database. For most objects, the TABLE_SCHEMA contains "dbo." The INFORMATION_SCHEMA views themselves also appear in this view. However, the TABLE_SCHEMA value for them is INFORMATION_SCHEMA.

Although the name of this view is TABLES, views are also listed. They can be distinguished from tables by the last field, TABLE_TYPE.
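For example, to list only the real tables and leave out the views, filter on that last column:

```sql
-- Views have TABLE_TYPE = 'VIEW'; tables have TABLE_TYPE = 'BASE TABLE'.
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_NAME
```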

SYSOBJECTS versus TABLES View

Of course, you can still obtain a list of all tables in the database by executing the following: select * from sysobjects where type = 'U'. However, there are some major differences between the TABLES view and sysobjects:

  • The TABLES view only displays tables on which the user has permissions. It is a useful way of allowing users to see only what they are supposed to know about.

  • The TABLES view is part of a standard. The sysobjects table is not guaranteed to stay the same.

However, this does not mean that you will never use the sysobjects table anymore. In fact, my previous select from sysobjects points out a crucial difference. By using the "where type = 'U'," I eliminate all system tables from the list. There is no way to differentiate between system tables and user tables in the TABLES view. (However, in the TABLES view, system tables such as sysobjects are listed with a TABLE_TYPE of view. Perhaps at some point system tables will be under a separate owner rather than dbo.)

The TABLES View in Action

Here is a simple example of how the TABLES view can be used to do much more than just list all the tables. When you look at a database for the first time, it is frequently useful to know how many rows there are in each table. The following code uses the TABLES view to build a cursor of each table name. For each table, it then does a select count(*):
if exists(select * from sysobjects where
type = 'U' and name = 'my_table_totals')
drop table my_table_totals
go
create table my_table_totals
(
table_name varchar(50),
totalrecs int null
)
go

declare cr_table_names cursor
for
select table_name
from INFORMATION_SCHEMA.tables
where
TABLE_TYPE = 'BASE TABLE'

declare @table varchar(50) ,@line varchar(500)

open cr_table_names

FETCH NEXT FROM cr_table_names into @table

WHILE @@FETCH_STATUS = 0

BEGIN
select @line = 'insert into my_table_totals (table_name, totalrecs) ' +
'select ''' + @table + ''', (select count(*) from ' + @table + ')'

select @line
exec (@line)

FETCH NEXT FROM cr_table_names into @table

END

CLOSE cr_table_names

DEALLOCATE cr_table_names

Wednesday, March 14, 2007

Changing Compatibility Mode of MSSQL2005

EXEC sp_dbcmptlevel Northwind, 80;

Changing the compatibility mode of a database to an earlier version will prevent you from using the new features of MSSQL2005, such as ranking functions, recursive queries, and so on. To change back to the MSSQL2005 level, do the following:

EXEC sp_dbcmptlevel Northwind, 90;

Outer Join Logical Processing Phases

Outer Join performs
  1. Cartesian Product
  2. ON filter
  3. Adding outer rows.

Outer rows are added for rows from the preserved table that have no match in the other table.
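A small sketch of these phases, using hypothetical Customers and Orders tables: the Cartesian product is filtered by the ON predicate, then step 3 adds back every customer with no matching order, with NULL in the order columns.

```sql
-- Customers is the preserved table; customers with no orders still appear,
-- carrying NULL in OrderID.
SELECT c.CustomerID, o.OrderID
FROM Customers c
LEFT OUTER JOIN Orders o
ON o.CustomerID = c.CustomerID
```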

Monday, March 5, 2007

Enabling/Disabling xp_cmdshell

In order to use xp_cmdshell from a query window, do the following steps:
  1. sp_configure 'xp_cmdshell','1' -- Set it enabled
  2. reconfigure -- apply the changes in db
Now you can run xp_cmdshell.

If you want to disable it, do the following:
  1. sp_configure 'xp_cmdshell','0' -- Set it disabled
  2. reconfigure -- apply the changes in db
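On SQL Server 2005, xp_cmdshell is an advanced configuration option, so if the step above fails you may first need to expose the advanced options (this extra step is my addition, not part of the original note):

```sql
-- Expose advanced options, then enable xp_cmdshell.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
```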

Saturday, February 24, 2007

SQL CASE

CASE expressions bring a vast degree of power and control to SQL Server programmers. A working knowledge of CASE expressions can make accessing and updating SQL Server data easier and perhaps, even more efficient. Additionally, CASE expressions enable more work to be accomplished using a single SQL statement, which should also improve efficiency and decrease development time. As such, CASE expressions should be a component of every SQL Server developer’s arsenal of programming techniques.

select title, price, Budget = CASE
WHEN price > 20.00 THEN 'Expensive'
WHEN price BETWEEN 10.00 AND 19.99 THEN 'Moderate'
WHEN price < 10.00 THEN 'Inexpensive'
ELSE 'Unknown'

END
FROM titles


Result
title price budget
-------------------------------------------------------------------------------- --------- -----------
The Busy Executive's Database Guide 19.99 Moderate
Cooking with Computers: Surreptitious Balance Sheets 11.95 Moderate
You Can Combat Computer Stress! 2.99 Inexpensive
Straight Talk About Computers 19.99 Moderate
Silicon Valley Gastronomic Treats 19.99 Moderate
The Gourmet Microwave 2.99 Inexpensive
The Psychology of Computer Cooking NULL Unknown
But Is It User Friendly? 22.95 Expensive
Secrets of Silicon Valley 20.00 Unknown
Net Etiquette NULL Unknown
Computer Phobic AND Non-Phobic Individuals: Behavior Variations 21.59 Expensive
Is Anger the Enemy? 10.95 Moderate
Life Without Fear 7.00 Inexpensive
Prolonged Data Deprivation: Four Case Studies 19.99 Moderate
Emotional Security: A New Algorithm 7.99 Inexpensive
Onions, Leeks, and Garlic: Cooking Secrets of the Mediterranean 20.95 Expensive
Fifty Years in Buckingham Palace Kitchens 11.95 Moderate
Sushi, Anyone? 14.99 Moderate



SQL Statement #2


select top(10) au_lname, au_fname, isSocalMan=
case state when 'CA' then 'Yes, he is.'
else 'No, he is not' end
from authors


au_lname au_fname isSocalMan
---------------------------------------- -------------------- -------------
White Johnson Yes, he is.
Green Marjorie Yes, he is.
Carson Cheryl Yes, he is.
O'Leary Michael Yes, he is.
Straight Dean Yes, he is.
Smith Meander No, he is not
Bennet Abraham Yes, he is.
Dull Ann Yes, he is.
Gringlesby Burt Yes, he is.
Locksley Charlene Yes, he is.
(10 row(s) affected)


SQL Statement #3



use northwind
go
SELECT
EmployeeID,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 1 THEN Freight ELSE 0 END) AS jan,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 2 THEN Freight ELSE 0 END) AS feb,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 3 THEN Freight ELSE 0 END) AS mar,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 4 THEN Freight ELSE 0 END) AS apr,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 5 THEN Freight ELSE 0 END) AS may,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 6 THEN Freight ELSE 0 END) AS jun,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 7 THEN Freight ELSE 0 END) AS jul,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 8 THEN Freight ELSE 0 END) AS aug,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 9 THEN Freight ELSE 0 END) AS sep,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 10 THEN Freight ELSE 0 END) AS oct,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 11 THEN Freight ELSE 0 END) AS nov,
SUM(CASE DATEPART(mm,ShippedDate) WHEN 12 THEN Freight ELSE 0 END) AS [dec]
FROM orders
GROUP BY EmployeeID
go


EmployeeID jan feb mar apr may jun jul aug sep oct nov dec
------------------------------------------------------------------------------ --------------------- --------------------- --------------------- ---------------------
1 969.88 371.06 856.37 1092.84 599.10 129.89 703.69 1226.63 352.01 720.55 704.70 1035.46
2 907.20 400.65 894.20 2368.82 376.87 545.15 298.73 182.74 430.22 1129.68 278.44 636.96
3 924.92 1644.45 1237.53 772.87 1290.63 1570.36 213.81 295.83 381.62 556.12 773.17 1223.43
4 703.92 2279.89 811.89 960.03 371.34 245.27 594.36 1332.21 818.09 1336.58 371.08 1161.78
5 708.31 621.09 201.94 8.80 237.93 127.34 314.58 10.76 283.91 160.98 328.19 914.88
6 328.57 75.03 372.72 785.08 107.47 260.54 41.59 394.19 261.35 385.30 321.27 373.61
7 1035.48 436.37 697.20 1416.70 427.74 89.16 108.68 713.51 585.20 528.22 492.69 33.80
8 838.61 435.37 1278.18 1221.37 619.74 55.72 81.69 828.82 190.84 1018.74 388.16 429.46
9 234.20 615.34 58.09 945.26 32.33 210.53 294.87 0.00 238.08 520.89 130.44 15.09


CASE has two formats:
The simple CASE function compares an expression to a set of simple expressions to determine the result.
The searched CASE function evaluates a set of Boolean expressions to determine the result.
Both formats support an optional ELSE argument.
Syntax
CASE input_expression
WHEN when_expression THEN result_expression
[ ...n ]
[
ELSE else_result_expression
]
END


CASE
WHEN Boolean_expression THEN result_expression
[ ...n ]
[
ELSE else_result_expression
]
END


Arguments
input_expression
The expression evaluated when you use the simple CASE format. The input_expression argument is any valid expression in Microsoft SQL Server 2005 Mobile Edition (SQL Server Mobile).

WHEN when_expression
A simple expression to which input_expression is compared when you use the simple CASE format. The when_expression argument is any valid SQL Server expression. The data types of input_expression and each when_expression must be the same or must be implicitly converted.

n
A placeholder indicating that multiple WHEN when_expression THEN result_expression clauses, or multiple WHEN Boolean_expression THEN result_expression clauses can be used.

THEN result_expression
The expression returned when input_expression = when_expression evaluates to TRUE, or when Boolean_expression evaluates to TRUE. The result_expression argument is any valid SQL Server expression.

ELSE else_result_expression
The expression returned if no comparison operation evaluates to TRUE. If this argument is omitted and no comparison operation evaluates to TRUE, CASE returns NULL. The else_result_expression argument is any valid SQL Server expression. The data types of else_result_expression and any result_expression must be the same or must be implicitly convertible.

WHEN Boolean_expression
The Boolean expression evaluated when you use the searched CASE format. Boolean_expression is any valid Boolean expression.

Result Types
Returns the highest precedence type from the set of types in result_expressions and the optional else_result_expression.

Return Value
Simple CASE function:

  • Evaluates input_expression, and then, in the order specified, evaluates input_expression = when_expression for each WHEN clause.
  • Returns the result_expression of the first (input_expression = when_expression) that evaluates to TRUE.
  • If no input_expression = when_expression evaluates to TRUE, else_result_expression is returned if an ELSE clause is specified, or NULL if no ELSE clause is specified.

Searched CASE function:

  • Evaluates, in the order specified, Boolean_expression for each WHEN clause.
  • Returns result_expression of the first Boolean_expression that evaluates to TRUE.
  • If no Boolean_expression evaluates to TRUE, else_result_expression is returned if an ELSE clause is specified, or NULL if no ELSE clause is specified.

Example
The following example returns the mode of shipping used for orders placed.
SELECT ShipVia, CASE ShipVia
WHEN 1 THEN 'A.Datum'
WHEN 2 THEN 'Contoso'
WHEN 3 THEN 'Consolidated Messenger'
ELSE 'Unknown'
END
FROM Orders


Using CASE Expressions When Modifying Data

UPDATE titles SET price = CASE
WHEN (price < 5.0 AND ytd_sales > 999.99) THEN price * 1.25
WHEN (price < 5.0 AND ytd_sales < 1000.00) THEN price * 1.15
WHEN (price > 4.99 AND ytd_sales > 999.99) THEN price * 1.2
ELSE price

END

The bottom line is that CASE expressions are very powerful, yet often neglected.
Favor using CASE expressions under the following circumstances:

  • when data needs to be converted from one type to another for display and no
    function exists to accomplish the task

  • when a summary row needs to be created from detail data

  • when conditional processing needs to be executed to determine results

  • when using UNION to “glue together” different subsets of a single table


One final usage guideline is to use the COALESCE function with your CASE expressions when you wish to avoid NULLs. Consider an employee table that contains three columns for wages: salary, commission, and retainer_fee. Any single employee can only receive one of these types of wages. So, two of the columns will contain NULLs, but one will always contain a value. The following statement uses the COALESCE function to return only the non-NULL value:


SELECT
COALESCE(salary, commission, retainer_fee, 0)

FROM
employee


The COALESCE function will go through the list of expressions (in this case columns and constants) and return the first non-NULL value encountered. In this case, the numeric constant 0 is added at the end of the list to provide a default value if all of the columns should
happen to be NULL.

Wednesday, February 14, 2007

Not All SQL Server Indexes Are Created Equal





by Brad M. McGehee





If you have much experience with indexes at all, you are probably already familiar with the difference between clustered and non-clustered indexes. But this article is not about them. This article is about whether or not the SQL Server Query Optimizer will use your carefully crafted indexes. You may not be aware of this, but just because a column has an index doesn't mean the Query Optimizer will use it. As you can imagine, creating an index that never will be used is a waste of time, and in the worst cases, it can even reduce the performance of your application. Let's learn why.



To start out, let's look at a simple example. Let's assume we have an accounting database. In that database is a table called "orders". Among a number of different columns in this table, we are interested in two: "orderid" and "employeeid". This table has 150,000 rows and there is a non-clustered index on the "employeeid" column. Now let's say we want to run the following query:



SELECT orderid FROM orders WHERE employeeid = 5



The first thing to notice about the query is that the "employeeid" column used in the WHERE clause of the query has a non-clustered index on it. Because of this, you would most likely assume that when this query is run through the Query Optimizer, the Query Optimizer would use the index to produce the requested results.



Unfortunately, you can't automatically make this assumption. Just because there is an available index doesn't necessarily mean that the Query Optimizer will use it. This is because the Query Optimizer always evaluates whether or not an index is useful before it is used. If the Query Optimizer examines an index and finds it not useful, it will ignore it, and if necessary, it will perform a table scan to produce the requested results.



So what is a useful index? To answer this question, we need to understand that one of the biggest underlying goals of the Query Optimizer is to reduce the amount of I/O, and the corresponding amount of time, it takes to execute a query. In other words, the Query Optimizer evaluates many different ways a query can be executed and picks the one it thinks will produce the least amount of I/O. But what may be surprising is that using an available index does not always result in the least amount of I/O. In many cases, especially with non-clustered indexes, a table scan can produce less I/O than an available index.



Before the Query Optimizer uses an index, the Query Optimizer evaluates the index to see if it is selective enough. What does this mean? Selectivity refers to the percentage of rows in a table that are returned by a query. A query is considered highly selective if it returns a very limited number of rows. A query is considered to have low selectivity if it returns a high percentage of rows. Generally speaking, if a query returns
less than 5% of the number of rows in a table, it is considered to have high selectivity, and the index will most likely be used. If the query returns from 5% - 10% of the rows, the index may or may not be used. If the query returns more than 10% of the rows, the index most likely will not be used. And assuming there are no other useful indexes for the query, a table scan will be performed.



Let's return to our example query:



SELECT orderid FROM orders WHERE employeeid = 5



Just by looking at the query we don't know whether the available index on employeeid will be used. Let's say we know that of the 150,000 rows in the table, "employeeid = 5" is true for 5,000 of them. If we divide 5,000 by 150,000 we get 3.3%. Since 3.3% is less than 5%, the Query Optimizer will most likely use the available index. But what if "employeeid = 5" is true for 25,000 rows instead? In that case, we divide 25,000 by 150,000 and get 16.6%. Since 16.6% is greater than 5%, or even 10%, the Query Optimizer will most likely not use the index and instead perform a table scan.
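The arithmetic above can be run directly against the table. The following sketch reuses the "orders" table and "employeeid" column from the example; the 5% threshold comment is the rule of thumb from this article, not a hard limit:

```sql
-- Measure what percentage of rows the example predicate matches
SELECT CAST(COUNT(CASE WHEN employeeid = 5 THEN 1 END) AS float)
       / COUNT(*) * 100.0 AS matching_pct
FROM orders
-- A result under roughly 5% suggests the non-clustered index
-- is likely to be used; well above 10% suggests a table scan
```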



So how can a table scan use less I/O than using an index, such as the non-clustered index in our example? Non-clustered indexes are great if the index is highly selective, especially if you will be returning one record. But if many records will be returned, and the index is not very selective, it is very expensive in I/O to retrieve the data. The reason for this is that the Query Optimizer has to first go to the index to locate the data (using up I/O) and then it has to go to the table to retrieve it (more I/O). At some point, the Query Optimizer determines that it takes less I/O to scan an entire table than it does to go back and forth between the index and the table to retrieve the requested rows.



The example given above applies mostly to non-clustered indexes. If the available index is a clustered index, then the index may be used, even if there is low selectivity, because the index is the table and I/O operations can be very efficient.



So how does the Query Optimizer know if an available index is selective enough to be useful? It does this by maintaining index statistics on each index in every table. Index statistics are a histogram of values that are stored in the sysindexes table. These statistics are a sampling of the available rows that tells the Query Optimizer approximately how selective a particular index is.



Index statistics are created every time an index is created, rebuilt, when the UPDATE STATISTICS command is run, and automatically by the Query Optimizer as the need arises. Index statistics are not maintained in real time because that would put too much overhead on the server. But because index statistics are not real time, they can get out of date, and sometimes, the Query Optimizer can make a poor choice because the index statistics it uses are not current.
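When statistics have drifted out of date, they can be refreshed by hand. A minimal sketch, assuming the "orders" table from the earlier example and a hypothetical index name:

```sql
-- Refresh out-of-date statistics manually
UPDATE STATISTICS orders                  -- all indexes on the table
UPDATE STATISTICS orders employeeid_idx   -- a single (hypothetical) index
```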



But just because statistics are current doesn뭪 mean that the Query Optimizer will use an available index. Remember, the Query Optimizer bases its decision on the selectivity of an index, and the Query Optimizer uses the index statistics to determine selectivity.

So if the Query Optimizer can check whether a particular index is useful, how can we do the same thing? Fortunately, there is a command that lets us examine an index and find out whether it is selective enough to be used.



The reason we want to know whether an index is selective enough is that if it isn't, it won't be used. And if an index won't be used, there is no point in having it. Most likely, dropping unnecessary indexes can boost the performance of your application because indexes, as you probably know, slow down INSERTs, UPDATEs, and DELETEs in a table because of the overhead of maintaining them. And if the table in question is subject to a high level of change, index maintenance can be the cause of some bottlenecks. So our goal is to ensure that the indexes we do have are selective enough to be useful. We don't want to maintain indexes that won't be used.



The command we will use to find the selectivity of an index is:



DBCC SHOW_STATISTICS (table_name, index_name)
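A concrete invocation, using the example table from earlier and a hypothetical index name, might look like:

```sql
-- Inspect the statistics behind one index (index name is hypothetical)
DBCC SHOW_STATISTICS (orders, employeeid_idx)
```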




When this command runs, it produces an output similar to the following. This is a real result based on one of the databases I maintain.



Statistics for INDEX 'in_tran_idx'.

Updated              Rows     Rows Sampled  Steps  Density       Average key length
-------------------- -------- ------------  -----  ------------  ------------------
Feb 24 2001  3:36AM  7380901  7163688       300    2.2528611E-5  0.0

(1 row(s) affected)

All density    Columns
-------------  ------------------------
2.2528611E-5   in_tran_key

Steps
-----------
1
283
301
340
371
403
456
...
44510

(300 row(s) affected)

DBCC execution completed. If DBCC printed error messages, contact your system administrator.



This result includes a lot of information, most of which is beyond the scope of this article. What we want to focus on is the density value 2.2528611E-5 under the "All density" column heading.



Density refers to the average percentage of duplicate rows in an index. If an indexed column, such as employeeid, has much duplicate data, then the index is said to have high density. But if an indexed column has mostly unique data, then the index is said to have low density.



Density is inversely related to selectivity. If density is high, then selectivity is low, which means an index may not be used. If density is low, then selectivity is high, and an index most likely will be used.



In the sample printout above, the density for the index is less than 1%. In turn, this means the selectivity of the table is over 99%, which means that the index is probably very useful for the Query Optimizer.



If you are an advanced DBA, you will probably have already noticed that I have oversimplified this discussion. Even so, the point I want to make in this article is still very valid: not all indexes are equal. Just because an index is available to the Query Optimizer does not mean it will always be used.



For DBAs, this means you need to keep an eye on your tables' indexes. As time permits, you may want to run the DBCC SHOW_STATISTICS command and see how selective your indexes actually are. You may find that some of your indexes are not being used. If that is the case, you may want to consider removing them, which in turn may speed up your application.



For new DBAs, removing rather than adding indexes may seem a backward way to performance tune your database. But the more you learn about how SQL Server works internally, the better you will understand the limits of using indexes to performance tune your applications.



To read about this subject in much greater depth, see chapter 15, "The Query Processor," in the book Inside Microsoft SQL Server 2000, by Kalen Delaney.

Sunday, February 4, 2007

MSSQL Cursors


General Concepts

Declaring a Cursor

  • SQL-92 Syntax

  • Transact-SQL Extended Syntax

  • Opening a Cursor

  • Fetching a Cursor

  • Closing a Cursor

  • Deallocating a Cursor

  • Cursor Optimization Tips

  • Literature




    General concepts


    In this article, I want to tell you how to create and use server
    side cursors and how you can optimize cursor performance.

    A cursor is a database object used by applications to manipulate data
    in a set on a row-by-row basis, instead of the typical SQL commands
    that operate on all the rows in the set at one time. For example,
    you can use a cursor to loop through a list of all user databases and
    perform multiple operations against each database by passing each
    database name as a variable.

    Server side cursors were first added in the SQL Server 6.0 release and
    are now supported in all editions of SQL Server 7.0 and SQL Server 2000.

    Before using a cursor, you must first declare it. Once a cursor
    has been declared, you can open it and fetch from it. You can fetch
    row by row and perform multiple operations on the currently active row
    in the cursor. When you have finished working with a cursor, you
    should close it and deallocate it to release SQL Server resources.
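The full lifecycle described above (declare, open, fetch, close, deallocate) can be sketched as a minimal loop; the "orders" table and "orderid" column here are hypothetical:

```sql
-- Minimal cursor lifecycle sketch ("orders" table is hypothetical)
DECLARE @orderid int

DECLARE order_cursor CURSOR FAST_FORWARD
FOR SELECT orderid FROM orders

OPEN order_cursor
FETCH NEXT FROM order_cursor INTO @orderid

WHILE @@FETCH_STATUS = 0
BEGIN
    -- row-by-row work on the current row goes here
    PRINT @orderid
    FETCH NEXT FROM order_cursor INTO @orderid
END

CLOSE order_cursor
DEALLOCATE order_cursor
```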

    Declaring a Cursor


    Before using a cursor, you must first declare it, i.e. define its
    scrolling behavior and the query used to build the result set on which
    the cursor operates. To declare a cursor, you can use either a syntax
    based on the SQL-92 standard or a syntax using a set of Transact-SQL
    extensions.

    SQL-92 Syntax


    This is SQL-92 Syntax:


    DECLARE cursor_name [INSENSITIVE] [SCROLL] CURSOR
    FOR select_statement
    [FOR {READ ONLY | UPDATE [OF column_name [,...n]]}]


    where

    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.

    INSENSITIVE - specifies that cursor will use a temporary copy of the data
    instead of base tables. This cursor does not allow modifications
    and modifications made to base tables are not reflected in the
    data returned by fetches made to this cursor.

    SCROLL - specifies that cursor can fetch data in all directions, not only
    sequentially until the end of the result set. If this argument is
    not specified, FETCH NEXT is the only fetch option supported.

    select_statement - the standard select statement, cannot contain
    COMPUTE, COMPUTE BY, FOR BROWSE, and INTO keywords.

    READ ONLY - specifies that cursor cannot be updated.

    UPDATE [OF column_name [,...n]] - specifies that all cursor's columns can be
    updated (if OF column_name [,...n] is not
    specified), or only the columns listed in the
    OF column_name [,...n] list allow modifications.
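Putting the SQL-92 options together, a declaration might look like the following sketch (the "employees" table and "lastname" column are hypothetical):

```sql
-- SQL-92 style declaration: a scrollable snapshot that cannot be updated
DECLARE emp_cursor INSENSITIVE SCROLL CURSOR
FOR SELECT lastname FROM employees
FOR READ ONLY
```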

    Cursor Options Compatibility


                   INSENSITIVE   SCROLL   READ ONLY   UPDATE
    INSENSITIVE        -           Yes       Yes        No
    SCROLL            Yes           -        Yes        Yes
    READ ONLY         Yes          Yes        -         No
    UPDATE            No           Yes       No          -


    Transact-SQL Extended Syntax


    This is Transact-SQL Extended Syntax:




    DECLARE cursor_name CURSOR
    [LOCAL | GLOBAL]
    [FORWARD_ONLY | SCROLL]
    [STATIC | KEYSET | DYNAMIC | FAST_FORWARD]
    [READ_ONLY | SCROLL_LOCKS | OPTIMISTIC]
    [TYPE_WARNING]
    FOR select_statement
    [FOR UPDATE [OF column_name [,...n]]]


    where



    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.

    LOCAL - specifies that cursor can be available only in the batch,
    stored procedure, or trigger in which the cursor was created.
    The LOCAL cursor will be implicitly deallocated when the batch,
    stored procedure, or trigger terminates.

    GLOBAL - specifies that cursor is global to the connection. The GLOBAL
    cursor will be implicitly deallocated at disconnect.

    FORWARD_ONLY - specifies that cursor can only fetch data sequentially
    from the first to the last row. FETCH NEXT is the only
    fetch option supported.

    STATIC - specifies that cursor will use a temporary copy of the data
    instead of base tables. This cursor does not allow modifications
    and modifications made to base tables are not reflected in the
    data returned by fetches made to this cursor.

    KEYSET - specifies that cursor uses the set of keys that uniquely
    identify the cursor's rows (the keyset), so that the membership and
    order of rows in the cursor are fixed when the cursor is opened.
    SQL Server uses a table in tempdb to store the keyset. The KEYSET
    cursor allows updates to nonkey values to be made through this
    cursor, but inserts made by other users are not visible. Updates to
    nonkey values made by other users are visible as the owner scrolls
    around the cursor, but updates to key values made by other users
    are not visible. If a row is deleted, an attempt to fetch the row
    returns an @@FETCH_STATUS of -2.

    DYNAMIC - specifies that cursor reflects all data changes made to the
    base tables as you scroll around the cursor. FETCH ABSOLUTE
    option is not supported with DYNAMIC cursor.

    FAST_FORWARD - specifies that cursor will be FORWARD_ONLY and READ_ONLY
    cursor. The FAST_FORWARD cursors produce the least amount
    of overhead on SQL Server.

    READ ONLY - specifies that cursor cannot be updated.

    SCROLL_LOCKS - specifies that cursor will lock the rows as they are
    read into the cursor to ensure that positioned updates
    or deletes made through the cursor will succeed.

    OPTIMISTIC - specifies that cursor does not lock rows as they are
    read into the cursor. So, the positioned updates or
    deletes made through the cursor will not succeed if
    the row has been updated outside the cursor since
    this row was read into the cursor.

    TYPE_WARNING - specifies that if the cursor will be implicitly
    converted from the requested type to another,
    a warning message will be sent to the client.

    select_statement - the standard select statement, cannot contain
    COMPUTE, COMPUTE BY, FOR BROWSE, and INTO keywords.

    UPDATE [OF column_name [,...n]] - specifies that all cursor's columns can be
    updated (if OF column_name [,...n] is not
    specified), or only the columns listed in the
    OF column_name [,...n] list allow modifications.
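A declaration using the Transact-SQL extensions might look like the following sketch (the "orders" table is hypothetical; note how LOCAL and FAST_FORWARD keep overhead low, per the optimization tips later in this article):

```sql
-- Transact-SQL extended declaration: batch-scoped, forward-only, read-only,
-- with a warning if the server silently converts the cursor type
DECLARE orders_cursor CURSOR LOCAL FAST_FORWARD TYPE_WARNING
FOR SELECT orderid FROM orders
```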

    Opening a Cursor


    Once a cursor has been declared, you must open it to fetch data from it.
    To open a cursor, you can use the following syntax:


    OPEN { { [GLOBAL] cursor_name } | @cursor_variable_name }


    where

    GLOBAL - if this argument was not specified and both a global and
    a local cursor exist with the same name, the local cursor
    will be opened; otherwise, the global cursor will be opened.

    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.

    cursor_variable_name - the name of a cursor variable that
    references a cursor.

    After a cursor is opened, you can determine the number of rows
    that were found by the cursor. To get this number, you can use the
    @@CURSOR_ROWS function.
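For example, assuming a cursor named orders_cursor has already been declared:

```sql
OPEN orders_cursor
SELECT @@CURSOR_ROWS AS cursor_rows
-- Note: the value can be negative while SQL Server is still
-- populating a keyset-driven cursor asynchronously
```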

    Fetching a Cursor


    Once a cursor has been opened, you can fetch from it row by row and make
    multiple operations on the currently active row in the cursor.
    To fetch from a cursor, you can use the following syntax:


    FETCH
    [ [ NEXT | PRIOR | FIRST | LAST
    | ABSOLUTE {n | @nvar}
    | RELATIVE {n | @nvar}
    ]
    FROM
    ]
    { { [GLOBAL] cursor_name } | @cursor_variable_name }
    [INTO @variable_name[,...n] ]


    where

    NEXT - the default cursor fetch option. FETCH NEXT returns the
    next row after the current row.

    PRIOR - returns the prior row before the current row.

    FIRST - returns the first row in the cursor.

    LAST - returns the last row in the cursor.

    ABSOLUTE {n | @nvar} - returns the nth row in the cursor. If a positive
    number was specified, the rows are counted from the
    top of the data set; if 0 was specified, no rows are
    returned; if a negative number was specified, the
    number of rows will be counted from the bottom of
    the data set.

    RELATIVE {n | @nvar} - returns the nth row in the cursor relative to
    the current row. If a positive number was specified,
    returns the nth row beyond the current row; if a
    negative number was specified, returns the nth row
    prior to the current row; if 0 was specified, returns
    the current row.

    GLOBAL - if this argument was not specified and both a global and
    a local cursor exist with the same name, the local cursor
    will be fetched; otherwise, the global cursor will be fetched.

    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.

    cursor_variable_name - the name of a cursor variable that
    references a cursor.

    INTO @variable_name[,...n] - allows data returned from the cursor
    to be held in temporary variables. The type
    of variables must match the type of columns
    in the cursor select list or support implicit
    conversion. The number of variables must match
    the number of columns in the cursor select list.
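The scrolling fetch options can be sketched as follows, assuming a hypothetical cursor "emp_cursor" declared with SCROLL over a single varchar column:

```sql
-- Scrolling fetches (require a SCROLL cursor; names are hypothetical)
DECLARE @lastname varchar(40)
FETCH LAST        FROM emp_cursor INTO @lastname   -- last row
FETCH ABSOLUTE 5  FROM emp_cursor INTO @lastname   -- 5th row from the top
FETCH RELATIVE -2 FROM emp_cursor INTO @lastname   -- 2 rows back from current
IF @@FETCH_STATUS <> 0 PRINT 'No row at that position'
```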

    Closing a Cursor


    When you have finished working with a cursor, you can close it to
    release any resources and locks that SQL Server may have used while
    the cursor was open.

    To close a cursor, you can use the following syntax:


    CLOSE { { [GLOBAL] cursor_name } | @cursor_variable_name }


    where

    GLOBAL - if this argument was not specified and both a global and
    a local cursor exist with the same name, the local cursor
    will be closed; otherwise, the global cursor will be closed.

    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.


    cursor_variable_name - the name of a cursor variable that
    references a cursor.

    Note. If you have closed a cursor, but have not deallocated it,
    you can open it again when needed.

    Deallocating a Cursor


    When you have finished working with a cursor and want to completely
    release SQL Server resources that were used by a cursor, you can
    deallocate a cursor.

    To deallocate a cursor, you can use the following syntax:


    DEALLOCATE { { [GLOBAL] cursor_name } | @cursor_variable_name }


    where

    GLOBAL - if this argument was not specified and both a global and
    a local cursor exist with the same name, the local cursor will be
    deallocated; otherwise, the global cursor will be deallocated.

    cursor_name - the name of the server side cursor, must contain
    from 1 to 128 characters.

    cursor_variable_name - the name of a cursor variable that
    references a cursor.

    Note. Deallocating a cursor completely removes all cursor references.
    So, after a cursor is deallocated, it no longer can be opened.

    Cursor Optimization Tips



    • Try to avoid using SQL Server cursors, whenever possible.

      SQL Server cursors can result in some performance degradation in
      comparison with select statements. Try to use correlated subqueries or
      derived tables if you need to perform row-by-row operations.


    • Do not forget to close SQL Server cursor when its result set is not needed.

      To close SQL Server cursor, you can use CLOSE {cursor_name} command. This command
      releases the cursor result set and frees any cursor locks held on the rows on
      which the cursor is positioned.


    • Do not forget to deallocate SQL Server cursor when the data structures
      comprising the cursor are not needed.


      To deallocate SQL Server cursor, you can use DEALLOCATE {cursor_name} command.
      This command removes a cursor reference and releases the data structures comprising
      the cursor.


    • Try to reduce the number of records to process in the cursor.

      To reduce the cursor result set, use the WHERE clause in the cursor's
      select statement. It can increase cursor performance and reduce SQL Server
      overhead.


    • Try to reduce the number of columns to process in the cursor.

      Include in the cursor's select statement only necessary columns. It will
      reduce the cursor result set. So, the cursor will use fewer resources.
      It can increase cursor performance and reduce SQL Server overhead.


    • Use READ ONLY cursors, whenever possible, instead of updatable cursors.

      Because using cursors can reduce concurrency and lead to unnecessary locking,
      try to use READ ONLY cursors, if you do not need to update cursor result set.


    • Try to avoid using insensitive, static and keyset cursors, whenever possible.

      These types of cursor produce the largest amount of overhead on SQL Server,
      because they cause a temporary table to be created in TEMPDB, which results
      in some performance degradation.


    • Use FAST_FORWARD cursors, whenever possible.

      The FAST_FORWARD cursors produce the least amount of overhead on SQL Server,
      because they are read-only cursors and can only be scrolled from the first
      to the last row. Use a FAST_FORWARD cursor if you do not need to update the
      cursor result set and FETCH NEXT will be the only fetch option used.


    • Use FORWARD_ONLY cursors, if you need an updatable cursor and FETCH NEXT
      will be the only fetch option used.


      If you need a read-only cursor and FETCH NEXT will be the only fetch
      option used, try to use a FAST_FORWARD cursor instead of a FORWARD_ONLY
      cursor. Note that FAST_FORWARD and FORWARD_ONLY are mutually exclusive:
      if one is specified, the other cannot be.
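As a sketch of the first tip above (avoiding cursors), a single set-based statement often replaces an entire fetch loop; the table and column names here are hypothetical:

```sql
-- Instead of fetching each matching order in a cursor loop
-- and updating it row by row, do it in one set-based statement
UPDATE orders
SET    processed = 1
WHERE  employeeid = 5
```

The set-based form lets the Query Optimizer choose the access path once for the whole operation, rather than paying per-row fetch overhead.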


    Literature


    1. DECLARE CURSOR

    2. OPEN

    3. FETCH

    4. http://msdn.microsoft.com/library/en-us/tsqlref/ts_globals_1c1f.asp

    5. CLOSE

    6. DEALLOCATE