Search Results

Search found 2518 results on 101 pages for 'declare'.


  • SQL Server 2008 alternative for SQL-DMO

    - by alexdelpiero
    Hi! I previously was using SQL-DMO to automatically generate scripts from the database. Now I upgraded to SQL Server 2008 and I don’t want to use this feature anymore since Microsoft will be dropping this feature off. Is there any other alternative I can use to connect to a server and generate scripts automatically from a database? Any answer is welcome. Thanks in advance. This is the procedure i was previously using: CREATE PROC GenerateSP ( @server varchar(30) = null, @uname varchar(30) = null, @pwd varchar(30) = null, @dbname varchar(30) = null, @filename varchar(200) = 'c:\script.sql' ) AS DECLARE @object int DECLARE @hr int DECLARE @return varchar(200) DECLARE @exec_str varchar(2000) DECLARE @spname sysname SET NOCOUNT ON -- Sets the server to the local server IF @server is NULL SELECT @server = @@servername -- Sets the database to the current database IF @dbname is NULL SELECT @dbname = db_name() -- Sets the username to the current user name IF @uname is NULL SELECT @uname = SYSTEM_USER -- Create an object that points to the SQL Server EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @object OUT IF @hr <> 0 BEGIN PRINT 'error create SQLOLE.SQLServer' RETURN END -- Connect to the SQL Server IF @pwd is NULL BEGIN EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname IF @hr <> 0 BEGIN PRINT 'error Connect' RETURN END END ELSE BEGIN EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname, @pwd IF @hr <> 0 BEGIN PRINT 'error Connect' RETURN END END --Verify the connection EXEC @hr = sp_OAMethod @object, 'VerifyConnection', @return OUT IF @hr <> 0 BEGIN PRINT 'error VerifyConnection' RETURN END SET @exec_str = 'DECLARE script_cursor CURSOR FOR SELECT name FROM ' + @dbname + '..sysobjects WHERE type = ''P'' ORDER BY Name' EXEC (@exec_str) OPEN script_cursor FETCH NEXT FROM script_cursor INTO @spname WHILE (@@fetch_status <> -1) BEGIN SET @exec_str = 'Databases("'+ @dbname +'").StoredProcedures("'+RTRIM(UPPER(@spname))+'").Script(74077,"'+ @filename +'")' EXEC @hr = sp_OAMethod @object, @exec_str, @return OUT IF @hr <> 0 BEGIN PRINT 'error Script' RETURN END FETCH NEXT FROM script_cursor INTO @spname END CLOSE script_cursor DEALLOCATE script_cursor -- Destroy the object EXEC @hr = sp_OADestroy @object IF @hr <> 0 BEGIN PRINT 'error destroy object' RETURN END GO
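
    The general-purpose replacement for SQL-DMO is SMO (SQL Server Management Objects), but if the goal is only to pull stored procedure source, a catalog query may be enough. A hedged T-SQL-only sketch, assuming the procedures live in the current database:
      -- Reads procedure definitions straight from the catalog views,
      -- which covers what the SQLDMO cursor above was scripting out.
      SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
             OBJECT_NAME(m.object_id)        AS ProcName,
             m.definition
      FROM sys.sql_modules AS m
      JOIN sys.objects     AS o ON o.object_id = m.object_id
      WHERE o.type = 'P'
      ORDER BY ProcName;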

    Read the article

  • Best approach, Dynamic OpenXML in T-SQL

    - by Martin Ongtangco
    hello, i'm storing XML values to an entry in my database. Originally, i extract the xml datatype to my business logic then fill the XML data into a DataSet. I want to improve this process by loading the XML right into the T-SQL. Instead of getting the xml as string then converting it on the BL. My issue is this: each xml entry is dynamic, meaning it can be any column created by the user. I tried using this approach, but it's giving me an error: CREATE PROCEDURE spXMLtoDataSet @id uniqueidentifier, @columns varchar(max) AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; DECLARE @name varchar(300); DECLARE @i int; DECLARE @xmlData xml; (SELECT @xmlData = data, @name = name FROM XmlTABLES WHERE (tableID = ISNULL(@id, tableID))); EXEC sp_xml_preparedocument @i OUTPUT, @xmlData DECLARE @tag varchar(1000); SET @tag = '/NewDataSet/' + @name; DECLARE @statement varchar(max) SET @statement = 'SELECT * FROM OpenXML(@i, @tag, 2) WITH (' + @columns + ')'; EXEC (@statement); EXEC sp_xml_removedocument @i END where i pass a dynamically written @columns. For example: spXMLtoDataSet 'bda32dd7-0439-4f97-bc96-50cdacbb1518', 'ID int, TypeOfAccident int, Major bit, Number_of_Persons int, Notes varchar(max)' but it kept on throwing me this exception: Msg 137, Level 15, State 2, Line 1 Must declare the scalar variable "@i". Msg 319, Level 15, State 1, Line 1 Incorrect syntax near the keyword 'with'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
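
    The "Must declare the scalar variable @i" error is most likely because @i lives in the outer batch and EXEC (@statement) runs in a new scope. A sketch of one fix, keeping the question's variable names: pass the document handle through sp_executesql and fold the XPath into the string.
      -- Hedged sketch: parameterise the dynamic batch so the XML handle stays in scope.
      DECLARE @statement nvarchar(max) =
          N'SELECT * FROM OPENXML(@handle, ''' + @tag + N''', 2) WITH (' + @columns + N')';
      EXEC sp_executesql @statement, N'@handle int', @handle = @i;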

    Read the article

  • Can I spread out a long running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus] I have a stored procedure in SQL server the gets, and decrypts a block of data. ( Credit cards in this case. ) Most of the time, the performance is tolerable, but there are a couple customers where the process is painfully slow, taking literally 1 minute to complete. ( Well, 59377ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load ) When I watch the process, I see that SQL is only using a single proc to perform the whole process, and typically only proc 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and to break the calls in half, ( top 50%, bottom 50% ), and spread the load, as a gross hack? ( just spit-balling here ) My stored proc: USE [Commerce] GO /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId] @companyId UNIQUEIDENTIFIER, @DecryptionKey NVARCHAR (MAX) AS SET NoCount ON DECLARE @cardId uniqueidentifier DECLARE @tmpdecryptedCardData VarChar(MAX); DECLARE @decryptedCardData VarChar(MAX); DECLARE @tmpTable as Table ( CardId uniqueidentifier, DecryptedCard NVarChar(Max) ) DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR Select cardId from CreditCards where companyId = @companyId and Active=1 order by addedBy desc --2 OPEN creditCards --3 FETCH creditCards INTO @cardId -- prime the cursor WHILE @@Fetch_Status = 0 BEGIN --OPEN creditCards DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey)) FROM CreditCardData where cardid = @cardId order by valueOrder OPEN creditCardData FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor WHILE @@Fetch_Status = 0 BEGIN print 'CreditCardData' print @tmpdecryptedCardData set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData print '@decryptedCardData' print @decryptedCardData; FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next END CLOSE creditCardData DEALLOCATE creditCardData insert into @tmpTable (CardId, DecryptedCard) values ( @cardId, @decryptedCardData ) set @decryptedCardData = '' FETCH NEXT FROM creditCards INTO @cardId -- fetch next END select CardId, DecryptedCard FROM @tmpTable CLOSE creditCards DEALLOCATE creditCards
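
    A sketch of a set-based alternative, using the same table and column names as the procedure: concatenating the decrypted pieces per card with FOR XML PATH removes both cursors, and the row-by-row looping is usually where most of the time goes, whether or not the optimizer then parallelises anything.
      -- Hedged sketch: one query instead of nested cursors.
      SELECT cc.cardId,
             ( SELECT CONVERT(nvarchar(max),
                      DecryptByCert(Cert_Id('Oh-Nay-Nay'), d.EncryptedCard, @DecryptionKey))
               FROM   CreditCardData AS d
               WHERE  d.cardid = cc.cardId
               ORDER BY d.valueOrder
               FOR XML PATH(''), TYPE ).value('.', 'nvarchar(max)') AS DecryptedCard
      FROM   CreditCards AS cc
      WHERE  cc.companyId = @companyId
        AND  cc.Active = 1
      ORDER BY cc.addedBy DESC;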

    Read the article

  • Newbie T-SQL dynamic stored procedure -- how can I improve it?

    - by Andy Jones
    I'm new to T-SQL; all my experience is in a completely different database environment (Openedge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough! This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of c**k-ups and gotchas in it that I know nothing about. The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the dba as a timed job. Could I have your suggestions as to how to make it fit best-practice? To bullet-proof it? ALTER PROCEDURE [dbo].[copyTable2Table] @sdb varchar(30), @stable varchar(30), @tdb varchar(30), @ttable varchar(30), @raiseerror bit = 1, @debug bit = 0 as begin set nocount on declare @source varchar(65) declare @target varchar(65) declare @dropstmt varchar(100) declare @insstmt varchar(100) declare @ErrMsg nvarchar(4000) declare @ErrSeverity int set @source = '[' + @sdb + '].[dbo].[' + @stable + ']' set @target = '[' + @tdb + '].[dbo].[' + @ttable + ']' set @dropStmt = 'drop table ' + @target set @insStmt = 'select * into ' + @target + ' from ' + @source set @errMsg = '' set @errSeverity = 0 if @debug = 1 print('Drop:' + @dropStmt + ' Insert:' + @insStmt) -- drop the target table, copy the source table to the target begin try begin transaction exec(@dropStmt) exec(@insStmt) commit end try begin catch if @@trancount > 0 rollback select @errMsg = error_message(), @errSeverity = error_severity() end catch -- update the log table insert into HHG_system.dbo.copyaudit (copytime, copyuser, source, target, errmsg, errseverity) values( getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity) if @debug = 1 print ( 'Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity) ) -- handle errors, return value if @errMsg <> '' begin if @raiseError = 1 raiserror(@errMsg, @errSeverity, 1) return 1 end return 0 END Thanks!
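
    Two small hardening ideas, sketched against the same variable names: build the identifiers with QUOTENAME instead of hand-placed brackets, and only drop the target if it already exists so the first run does not fail. Note that the quoted names can exceed the varchar(65) declared for @source and @target, so those may need widening.
      -- Hedged sketch: safer identifier handling for the dynamic statements.
      SET @source   = QUOTENAME(@sdb) + '.[dbo].' + QUOTENAME(@stable);
      SET @target   = QUOTENAME(@tdb) + '.[dbo].' + QUOTENAME(@ttable);
      SET @dropStmt = 'IF OBJECT_ID(''' + @target + ''') IS NOT NULL DROP TABLE ' + @target;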

    Read the article

  • SQL Server Reset Identity Increment for all tables

    - by DanSpd
    Basically I need to reset Identity Increment for all tables to its original. Here I tried some code, but it fails. http://pastebin.com/KSyvtK5b actual code from link: USE World00_Character GO -- Create a cursor to loop through the System Ojects and get each table name DECLARE TBL_CURSOR CURSOR -- Declare the SQL Statement to cursor through FOR ( SELECT Name FROM Sysobjects WHERE Type='U' ) -- Declare the @SQL Variable which will hold our dynamic sql DECLARE @SQL NVARCHAR(MAX); SET @SQL = ''; -- Declare the @TblName Variable which will hold the name of the current table DECLARE @TblName NVARCHAR(MAX); -- Open the Cursor OPEN TBL_CURSOR -- Setup the Fetch While that will loop through our cursor and set @TblName FETCH NEXT FROM TBL_CURSOR INTO @TblName -- Do this while we are not at the end of the record set WHILE (@@FETCH_STATUS <> -1) BEGIN -- Appeand this table's select count statement to our sql variable SET @SQL = @SQL + ' ( SELECT '''+@TblName+''' AS Table_Name,COUNT(*) AS Count FROM '+@TblName+' ) UNION'; -- Delete info EXEC('DBCC CHECKIDENT ('+@TblName+',RESEED,(SELECT IDENT_SEED('+@TblName+')))'); -- Pull the next record FETCH NEXT FROM TBL_CURSOR INTO @TblName -- End the Cursor Loop END -- Close and Clean Up the Cursor CLOSE TBL_CURSOR DEALLOCATE TBL_CURSOR -- Since we were adding the UNION at the end of each part, the last query will have -- an extra UNION. Lets trim it off. SET @SQL = LEFT(@SQL,LEN(@SQL)-6); -- Lets do an Order By. You can pick between Count and Table Name by picking which -- line to execute below. SET @SQL = @SQL + ' ORDER BY Count'; --SET @SQL = @SQL + ' ORDER BY Table_Name'; -- Now that our Dynamic SQL statement is ready, lets execute it. EXEC (@SQL); GO error message: Error: Msg 102, Level 15, State 1, Line 1 Incorrect syntax near '('. How can I either fix that SQL or reset identity for all tables to its original? Thank you
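
    The syntax error is almost certainly the subquery inside DBCC CHECKIDENT, which only accepts a constant reseed value. A sketch of one workaround, reusing the question's @TblName variable: read the seed into a variable first, then build the DBCC command from it.
      -- Hedged sketch: fetch the seed before calling DBCC CHECKIDENT.
      DECLARE @Seed nvarchar(20) = CONVERT(nvarchar(20), IDENT_SEED(@TblName));
      IF @Seed IS NOT NULL   -- IDENT_SEED is NULL for tables without an identity column
          EXEC (N'DBCC CHECKIDENT (' + QUOTENAME(@TblName) + N', RESEED, ' + @Seed + N')');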

    Read the article

  • MS SQL: Can you help me with this query?

    - by rlb.usa
    I want to run a diagnostic report on our MS SQL 2008 database server. I am looping through all of the databases, and then for each database, I want to look at each table. But, when I go to look at each table (with tbl_cursor), it always picks up the tables in the database 'master'. I think it's because of my tbl_cursor selection : SELECT table_name FROM information_schema.tables WHERE table_type = 'base table' How do I fix this? Here's the entire code: SET NOCOUNT ON DECLARE @table_count INT DECLARE @db_cursor VARCHAR(100) DECLARE database_cursor CURSOR FOR SELECT name FROM sys.databases where name<>N'master' OPEN database_cursor FETCH NEXT FROM database_cursor INTO @db_cursor WHILE @@Fetch_status = 0 BEGIN PRINT @db_cursor SET @table_count = 0 DECLARE @table_cursor VARCHAR(100) DECLARE tbl_cursor CURSOR FOR SELECT table_name FROM information_schema.tables WHERE table_type = 'base table' OPEN tbl_cursor FETCH NEXT FROM tbl_cursor INTO @table_cursor WHILE @@Fetch_status = 0 BEGIN DECLARE @table_cmd NVARCHAR(255) SET @table_cmd = N'IF NOT EXISTS( SELECT TOP(1) * FROM ' + @table_cursor + ') PRINT N'' Table ''''' + @table_cursor + ''''' is empty'' ' --PRINT @table_cmd --debug EXEC sp_executesql @table_cmd SET @table_count = @table_count + 1 FETCH NEXT FROM tbl_cursor INTO @table_cursor END CLOSE tbl_cursor DEALLOCATE tbl_cursor PRINT @db_cursor + N' Total Tables : ' + CAST( @table_count as varchar(2) ) PRINT N'' -- print another blank line SET @table_count = 0 FETCH NEXT FROM database_cursor INTO @db_cursor END CLOSE database_cursor DEALLOCATE database_cursor SET NOCOUNT OFF
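
    The inner cursor always sees master because information_schema.tables is read in whatever database the batch happens to be running in. A sketch of one fix, reusing the question's @db_cursor variable: qualify the view with the database name through dynamic SQL, and apply the same prefix to the table names used later in the EXISTS check.
      -- Hedged sketch: read the table list from the database the outer cursor is on.
      DECLARE @tbl_list TABLE (table_name sysname);
      DECLARE @list_cmd nvarchar(400) =
          N'SELECT table_name FROM ' + QUOTENAME(@db_cursor)
        + N'.information_schema.tables WHERE table_type = ''BASE TABLE''';
      INSERT INTO @tbl_list (table_name)
      EXEC sp_executesql @list_cmd;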

    Read the article

  • mysql dynamic cursor

    - by machaa
    Here is the procedure I wrote- Cursors c1 & c2. c2 is inside c1, I tried declaring c2 below c1 (outside the c1 cursor) but then I is NOT taking the updated value :( Any suggestions to make it working would be helpful, Thanks create table t1(i int); create table t2(i int, j int); insert into t1(i) values(1), (2), (3), (4), (5); insert into t2(i, j) values(1, 6), (2, 7), (3, 8), (4, 9), (5, 10); delimiter $ CREATE PROCEDURE p1() BEGIN DECLARE I INT; DECLARE J INT; DECLARE done INT DEFAULT 0; DECLARE c1 CURSOR FOR SELECT i FROM t1; DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1; OPEN c1; REPEAT FETCH c1 INTO I; IF NOT done THEN select I; DECLARE c2 CURSOR FOR SELECT j FROM t2 WHERE i = I; OPEN c2; REPEAT FETCH c2 into J; IF NOT done THEN SELECT J; END IF; UNTIL done END REPEAT; CLOSE c2; set done = 0; END IF; UNTIL done END REPEAT; CLOSE c1; END$ delimiter ;
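
    A sketch of one way to make this work: MySQL only allows DECLARE at the start of a BEGIN...END block, so the inner cursor has to live in its own nested block, which is re-entered (and the cursor re-opened) on every outer row. The loop variables are also renamed here to v_i and v_j, because MySQL resolves a reference like I in "WHERE i = I" to the local variable rather than the column.
      -- Hedged sketch: nested block so the inner cursor is declared legally and
      -- re-opened with the current outer value on each pass.
      DELIMITER $
      CREATE PROCEDURE p1_nested()
      BEGIN
        DECLARE v_i INT;
        DECLARE done INT DEFAULT 0;
        DECLARE c1 CURSOR FOR SELECT i FROM t1;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
        OPEN c1;
        REPEAT
          FETCH c1 INTO v_i;
          IF NOT done THEN
            BEGIN
              DECLARE v_j INT;
              DECLARE done2 INT DEFAULT 0;
              DECLARE c2 CURSOR FOR SELECT j FROM t2 WHERE i = v_i;
              DECLARE CONTINUE HANDLER FOR NOT FOUND SET done2 = 1;
              OPEN c2;
              REPEAT
                FETCH c2 INTO v_j;
                IF NOT done2 THEN SELECT v_j; END IF;
              UNTIL done2 END REPEAT;
              CLOSE c2;
            END;
          END IF;
        UNTIL done END REPEAT;
        CLOSE c1;
      END$
      DELIMITER ;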

    Read the article

  • Stored procedure using a cursor in MySQL

    - by RAVI
    I wrote a stored procedure using cursor in mysql but that procedure is taking 10 second to fetch the result while that result set have only 450 records so, I want to know that why that proedure is taking that much time to fetch tha record. procedure as below: DELIMITER // DROP PROCEDURE IF EXISTS curdemo123// CREATE PROCEDURE curdemo123(IN Branchcode int,IN vYear int,IN vMonth int) BEGIN DECLARE EndOfData,tempamount INT DEFAULT 0; DECLARE tempagent_code,tempplantype,tempsaledate CHAR(12); DECLARE tempspot_rate DOUBLE; DECLARE var1,totalrow INT DEFAULT 1; DECLARE cur1 CURSOR FOR select SQL_CALC_FOUND_ROWS ad.agentCode,ad.planType,ad.amount,ad.date from adplan_detailstbl ad where ad.branchCode=Branchcode and (ad.date between '2009-12-1' and '2009-12-31')order by ad.NUM_ID asc; DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET EndOfData = 1; DROP TEMPORARY TABLE IF EXISTS temptable; CREATE TEMPORARY TABLE temptable (agent_code varchar(15), plan_type char(12),sale double,spot_rate double default '0.0', dATE DATE); OPEN cur1; SET totalrow=FOUND_ROWS(); while var1 <= totalrow DO fetch cur1 into tempagent_code,tempplantype,tempamount,tempsaledate; IF((tempplantype='Unit Plan' OR tempplantype='MIP') OR tempplantype='STUP') then select spotRate into tempspot_rate from spot_amount where ((monthCode=vMonth and year=vYear) and ((agentCode=tempagent_code and branchCode=Branchcode) and (planType=tempplantype))); INSERT INTO temptable VALUES(tempagent_code,tempplantype,tempamount,tempspot_rate,tempsaledate); else INSERT INTO temptable(agent_code,plan_type,sale,dATE) VALUES(tempagent_code,tempplantype,tempamount,tempsaledate); END IF; SET var1=var1+1; END WHILE; CLOSE cur1; select * from temptable; DROP TABLE temptable; END // DELIMITER ;
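
    A sketch of a set-based alternative, with table and column names taken from the question: doing the spot-rate lookup as a LEFT JOIN turns roughly 450 single-row SELECTs into one statement, which is usually where this kind of cursor spends its time.
      -- Hedged sketch: one INSERT ... SELECT instead of the cursor plus per-row lookup.
      INSERT INTO temptable (agent_code, plan_type, sale, spot_rate, dATE)
      SELECT ad.agentCode,
             ad.planType,
             ad.amount,
             CASE WHEN ad.planType IN ('Unit Plan', 'MIP', 'STUP')
                  THEN IFNULL(sa.spotRate, 0)
                  ELSE 0 END,
             ad.date
      FROM adplan_detailstbl AS ad
      LEFT JOIN spot_amount AS sa
             ON  sa.agentCode  = ad.agentCode
             AND sa.branchCode = ad.branchCode
             AND sa.planType   = ad.planType
             AND sa.monthCode  = vMonth
             AND sa.year       = vYear
      WHERE ad.branchCode = Branchcode
        AND ad.date BETWEEN '2009-12-01' AND '2009-12-31';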

    Read the article

  • Why does this simple MySQL procedure take way too long to complete?

    - by Howard Guo
    This is a very simple MySQL stored procedure. Cursor "commission" has only 3000 records, but the procedure call takes more than 30 seconds to run. Why is that? DELIMITER // DROP PROCEDURE IF EXISTS apply_credit// CREATE PROCEDURE apply_credit() BEGIN DECLARE done tinyint DEFAULT 0; DECLARE _pk_id INT; DECLARE _eid, _source VARCHAR(255); DECLARE _lh_revenue, _acc_revenue, _project_carrier_expense, _carrier_lh, _carrier_acc, _gross_margin, _fsc_revenue, _revenue, _load_count DECIMAL; DECLARE commission CURSOR FOR SELECT pk_id, eid, source, lh_revenue, acc_revenue, project_carrier_expense, carrier_lh, carrier_acc, gross_margin, fsc_revenue, revenue, load_count FROM ct_sales_commission; DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1; DELETE FROM debug; OPEN commission; REPEAT FETCH commission INTO _pk_id, _eid, _source, _lh_revenue, _acc_revenue, _project_carrier_expense, _carrier_lh, _carrier_acc, _gross_margin, _fsc_revenue, _revenue, _load_count; INSERT INTO debug VALUES(concat('row ', _pk_id)); UNTIL done = 1 END REPEAT; CLOSE commission; END// DELIMITER ; CALL apply_credit(); SELECT * FROM debug;
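
    One thing worth ruling out first, sketched below: if debug is an InnoDB table and autocommit is on, each of the roughly 3000 INSERTs is committed (and flushed) individually, which by itself can account for much of the time. Wrapping the loop in a single transaction is a cheap experiment.
      -- Hedged sketch: batch the per-row logging into one transaction.
      DELETE FROM debug;
      START TRANSACTION;
        -- OPEN commission; REPEAT ... FETCH ... INSERT INTO debug ... UNTIL done END REPEAT; CLOSE commission;
      COMMIT;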

    Read the article

  • Convert SQL Server stored procedure to MySQL

    - by karthik
    I need to covert the following SP of SQL Server To MySql. I am new to MySql.. Help needed. CREATE PROC InsertGenerator (@tableName varchar(100)) as --Declare a cursor to retrieve column specific information --for the specified table DECLARE cursCol CURSOR FAST_FORWARD FOR SELECT column_name,data_type FROM information_schema.columns WHERE table_name = @tableName OPEN cursCol DECLARE @string nvarchar(3000) --for storing the first half --of INSERT statement DECLARE @stringData nvarchar(3000) --for storing the data --(VALUES) related statement DECLARE @dataType nvarchar(1000) --data types returned --for respective columns SET @string='INSERT '+@tableName+'(' SET @stringData='' DECLARE @colName nvarchar(50) FETCH NEXT FROM cursCol INTO @colName,@dataType IF @@fetch_status<>0 begin print 'Table '+@tableName+' not found, processing skipped.' close curscol deallocate curscol return END WHILE @@FETCH_STATUS=0 BEGIN IF @dataType in ('varchar','char','nchar','nvarchar') BEGIN SET @stringData=@stringData+'''''''''+ isnull('+@colName+','''')+'''''',''+' END ELSE if @dataType in ('text','ntext') --if the datatype --is text or something else BEGIN SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+' END ELSE IF @dataType = 'money' --because money doesn't get converted --from varchar implicitly BEGIN SET @stringData=@stringData+'''convert(money,''''''+ isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+' END ELSE IF @dataType='datetime' BEGIN SET @stringData=@stringData+'''convert(datetime,''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+''''''),''+' END ELSE IF @dataType='image' BEGIN SET @stringData=@stringData+'''''''''+ isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+' END ELSE --presuming the data type is int,bit,numeric,decimal BEGIN SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+'''''',''+' END SET @string=@string+@colName+',' FETCH NEXT FROM cursCol INTO @colName,@dataType END
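
    A sketch of the MySQL shapes this conversion needs (a skeleton only, not a full port): a NOT FOUND handler replaces @@FETCH_STATUS, information_schema.columns works much the same, and string building uses CONCAT() instead of +.
      -- Hedged sketch: cursor skeleton in MySQL for the same column loop.
      DELIMITER $
      CREATE PROCEDURE InsertGenerator(IN p_tableName VARCHAR(100))
      BEGIN
        DECLARE v_colName  VARCHAR(64);
        DECLARE v_dataType VARCHAR(64);
        DECLARE v_string   TEXT DEFAULT '';
        DECLARE done INT DEFAULT 0;
        DECLARE cursCol CURSOR FOR
          SELECT column_name, data_type
          FROM information_schema.columns
          WHERE table_name = p_tableName AND table_schema = DATABASE();
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;  -- replaces @@FETCH_STATUS
        OPEN cursCol;
        read_loop: LOOP
          FETCH cursCol INTO v_colName, v_dataType;
          IF done THEN LEAVE read_loop; END IF;
          SET v_string = CONCAT(v_string, v_colName, ',');  -- CONCAT instead of '+'
        END LOOP;
        CLOSE cursCol;
        SELECT v_string;
      END$
      DELIMITER ;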

    Read the article

  • Round time to the nearest 5 minutes in SQL Server

    - by Drako
    i don't know if it can be usefull to somebody but I went crazy looking for a solution and ended up doing it myself. Here is a function that (according to a date passed as parameter), returns the same date and approximate time to the nearest multiple of 5. It is a slow query, so if anyone has a better solution, it is welcome. A greeting. CREATE FUNCTION [dbo].[RoundTime] (@Time DATETIME) RETURNS DATETIME AS BEGIN DECLARE @min nvarchar(50) DECLARE @val int DECLARE @hour int DECLARE @temp int DECLARE @day datetime DECLARE @date datetime SET @date = CONVERT(DATETIME, @Time, 120) SET @day = (select DATEADD(dd, 0, DATEDIFF(dd, 0, @date))) SET @hour = (select datepart(hour,@date)) SET @min = (select datepart(minute,@date)) IF LEN(@min) > 1 BEGIN SET @val = CAST(substring(@min, 2, 1) as int) END else BEGIN SET @val = CAST(substring(@min, 1, 1) as int) END IF @val <= 2 BEGIN SET @val = CAST(CAST(@min as int) - @val as int) END else BEGIN IF (@val <> 5) BEGIN SET @temp = 5 - CAST(@min%5 as int) SET @val = CAST(CAST(@min as int) + @temp as int) END IF (@val = 60) BEGIN SET @val = 0 SET @hour = @hour + 1 END IF (@hour = 24) BEGIN SET @day = DATEADD(day,1,@day) SET @hour = 0 SET @min = 0 END END RETURN CONVERT(datetime, CAST(DATEPART(YYYY, @day) as nvarchar) + '-' + CAST(DATEPART(MM, @day) as nvarchar) + '-' + CAST(DATEPART(dd, @day) as nvarchar) + ' ' + CAST(@hour as nvarchar) + ':' + CAST(@val as nvarchar), 120) END
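
    For comparison, a shorter arithmetic approach (a sketch; unlike the function above it also looks at seconds when deciding which multiple of 5 is nearest): add two and a half minutes, then truncate to a 5-minute boundary with DATEDIFF/DATEADD against day zero, so no string handling is needed.
      -- Hedged sketch: round a datetime to the nearest 5 minutes with date arithmetic only.
      DECLARE @Time datetime = '2012-03-21 09:53:40';
      SELECT DATEADD(minute,
                     (DATEDIFF(minute, 0, DATEADD(second, 150, @Time)) / 5) * 5,
                     0) AS RoundedTime;   -- 2012-03-21 09:55:00.000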

    Read the article

  • SQL Server 2008 Compression

    - by Peter Larsson
    Hi! Today I am going to talk about compression in SQL Server 2008. The data warehouse I currently design and develop holds historical data back to 1973. The data warehouse will have an other blog post laster due to it's complexity. However, the server has 60GB of memory (of which 48 is dedicated to SQL Server service), so all data didn't fit in memory and the SAN is not the fastest one around. So I decided to give compression a go, since we use Enterprise Edition anyway. This is the code I use to compress all tables with PAGE compression. DECLARE @SQL VARCHAR(MAX)   DECLARE curTables CURSOR FOR             SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))                     + '.' + QUOTENAME(OBJECT_NAME(object_id))                     + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'             FROM    sys.tables   OPEN    curTables   FETCH   NEXT FROM    curTables INTO    @SQL   WHILE @@FETCH_STATUS = 0     BEGIN         IF @SQL IS NOT NULL             RAISERROR(@SQL, 10, 1) WITH NOWAIT           FETCH   NEXT         FROM    curTables         INTO    @SQL     END   CLOSE       curTables DEALLOCATE  curTables Copy and paste the result to a new code window and execute the statements. One thing I noticed when doing this, is that the database grows with the same size as the table. If the database cannot grow this size, the operation fails. For me, I first ended up with orphaned connection. Not good. And this is the code I use to create the index compression statements DECLARE @SQL VARCHAR(MAX)   DECLARE curIndexes CURSOR FOR             SELECT      'ALTER INDEX ' + QUOTENAME(name)                         + ' ON '                         + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))                         + '.'                         + QUOTENAME(OBJECT_NAME(object_id))                         + ' REBUILD PARTITION = ALL WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE)'             FROM        sys.indexes             WHERE       OBJECTPROPERTY(object_id, 'IsMSShipped') = 0                         AND OBJECTPROPERTY(object_id, 'IsTable') = 1             ORDER BY    CASE type_desc                             WHEN 'CLUSTERED' THEN 1                             ELSE 2                         END   OPEN    curIndexes   FETCH   NEXT FROM    curIndexes INTO    @SQL   WHILE @@FETCH_STATUS = 0     BEGIN         IF @SQL IS NOT NULL             RAISERROR(@SQL, 10, 1) WITH NOWAIT           FETCH   NEXT         FROM    curIndexes         INTO    @SQL     END   CLOSE       curIndexes DEALLOCATE  curIndexes When this was done, I noticed that the 90GB database now only was 17GB. And most important, complete database now could reside in memory! After this I took care of the administrative tasks, backups. Here I copied the code from Management Studio because I didn't want to give too much time for this. The code looks like (notice the compression option). BACKUP DATABASE [Yoda] TO              DISK = N'D:\Fileshare\Backup\Yoda.bak' WITH            NOFORMAT,                 INIT,                 NAME = N'Yoda - Full Database Backup',                 SKIP,                 NOREWIND,                 NOUNLOAD,                 COMPRESSION,                 STATS = 10,                 CHECKSUM GO   DECLARE @BackupSetID INT   SELECT  @BackupSetID = Position FROM    msdb..backupset WHERE   database_name = N'Yoda'         AND backup_set_id =(SELECT MAX(backup_set_id) FROM msdb..backupset WHERE database_name = N'Yoda')   IF @BackupSetID IS NULL     RAISERROR(N'Verify failed. 
Backup information for database ''Yoda'' not found.', 16, 1)   RESTORE VERIFYONLY FROM    DISK = N'D:\Fileshare\Backup\Yoda.bak' WITH    FILE = @BackupSetID,         NOUNLOAD,         NOREWIND GO After running the backup, the file size was even more reduced due to the zip-like compression algorithm used in SQL Server 2008. The file size? Only 9 GB. //Peso
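
    A related check that can be run before committing to the rebuilds (a sketch; the table name is a placeholder): SQL Server 2008 ships a procedure that estimates how much a given table would gain from PAGE compression.
      -- Hedged sketch: estimate PAGE compression savings for one table.
      EXEC sp_estimate_data_compression_savings
           @schema_name      = 'dbo',
           @object_name      = 'FactOrders',   -- placeholder table name
           @index_id         = NULL,
           @partition_number = NULL,
           @data_compression = 'PAGE';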

    Read the article

  • Finding the XPath with the node name

    - by julien.schneider(at)oracle.com
    A function that i find missing is to get the Xpath expression of a node. For example, suppose i only know the node name <theNode>, i'd like to get its complete path /Where/is/theNode.   Using this rather simple Xquery you can easily get the path to your node. declare namespace orcl = "http://www.oracle.com/weblogic_soa_and_more"; declare function orcl:findXpath($path as element()*) as xs:string { if(local-name($path/..)='') then local-name($path) else concat(orcl:findXpath($path/..),'/',local-name($path)) }; declare function orcl:PathFinder($inputRecord as element(), $path as element()) as element(*) { { for $index in $inputRecord//*[local-name()=$path/text()] return orcl:findXpath($index) } }; declare variable $inputRecord as element() external; declare variable $path as element() external; orcl:PathFinder($inputRecord, $path)   With a path         <myNode>nodeName</myNode>  and a message         <node1><node2><nodeName>test</nodeName></node2></node1>  the result will be         node1/node2/nodeName   This is particularly useful when you use the Validate action of OSB because Validate only returns the xml node which is in error and not the full location itself. The following OSB project reuses this Xquery to reformat the result of the Validate Action. Just send an invalid xml like <myElem http://blogs.oracle.com/weblogic_soa_and_more"http://blogs.oracle.com/weblogic_soa_and_more">      <mySubElem>      </mySubElem></myElem>   you'll get as nice <MessageIsNotValid> <ErrorDetail  nbr="1"> <dataElementhPath>Body/myElem/mySubElem</dataElementhPath> <message> Expected element 'Subelem1@http://blogs.oracle.com/weblogic_soa_and_more' before the end of the content in element mySubElem@http://blogs.oracle.com/weblogic_soa_and_more </message> </ErrorDetail> </MessageIsNotValid>   Download the OSB project : sbconfig_xpath.jar   Enjoy.            

    Read the article

  • INNER JOIN code calculated value with SELECT statement

    - by sp-1986
    I have the following stored procedure which will generate mon to sun and then creates a temp table with a series of 'weeks' (start and end weeks) : USE [test_staff] GO /****** Object: StoredProcedure [dbo].[sp_timesheets_all_staff_by_week_by_job_grouping_by_site] Script Date: 03/21/2012 09:04:49 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE PROCEDURE [dbo].[sp_timesheets_all_staff_by_week_by_job_grouping_by_site] ( @grouping_ref int, @week_ref int ) AS CREATE TABLE #WeeklyList ( Start_Week date, End_Week date, week_ref int --month_name date ) DECLARE @REPORT_DATE DATETIME, @WEEK_BEGINING VARCHAR(10) SELECT @REPORT_DATE = '2011-01-19T00:00:00' --SELECT @REPORT_DATE = GETDATE() -- should grab the date now. SELECT @WEEK_BEGINING = 'MONDAY' IF @WEEK_BEGINING = 'MONDAY' SET DATEFIRST 1 ELSE IF @WEEK_BEGINING = 'TUESDAY' SET DATEFIRST 2 ELSE IF @WEEK_BEGINING = 'WEDNESDAY' SET DATEFIRST 3 ELSE IF @WEEK_BEGINING = 'THURSDAY' SET DATEFIRST 4 ELSE IF @WEEK_BEGINING = 'FRIDAY' SET DATEFIRST 5 ELSE IF @WEEK_BEGINING = 'SATURDAY' SET DATEFIRST 6 ELSE IF @WEEK_BEGINING = 'SUNDAY' SET DATEFIRST 7 DECLARE @WEEK_START_DATE DATETIME, @WEEK_END_DATE DATETIME --GET THE WEEK START DATE SELECT @WEEK_START_DATE = @REPORT_DATE - (DATEPART(DW, @REPORT_DATE) - 1) --GET THE WEEK END DATE SELECT @WEEK_END_DATE = @REPORT_DATE + (7 - DATEPART(DW, @REPORT_DATE)) PRINT 'Week Start: ' + CONVERT(VARCHAR, @WEEK_START_DATE) PRINT 'Week End: ' + CONVERT(VARCHAR, @WEEK_END_DATE) DECLARE @Interval int = datediff(WEEK,getdate(),@WEEK_START_DATE)+1 --SELECT Start_Week=@WEEK_START_DATE --, End_Week=@WEEK_END_DATE --INTO #WeekList INSERT INTO #WeeklyList SELECT Start_Week=@WEEK_START_DATE, End_Week=@WEEK_END_DATE WHILE @Interval <= 0 BEGIN set @WEEK_START_DATE=DATEADD(WEEK,1,@WEEK_START_DATE) set @WEEK_END_DATE=DATEADD(WEEK,1,@WEEK_END_DATE) INSERT INTO #WeeklyList values (@WEEK_START_DATE,@WEEK_END_DATE) SET @Interval += 1; END SELECT CONVERT(VARCHAR(11), Start_Week, 106) AS 'month_name', CONVERT(VARCHAR(11), End_Week, 106) AS 'End', DATEDIFF(DAY, 0, Start_Week) / 7 AS week_ref -- create the unique week reference number --'VIEW' AS month_name FROM #WeeklyList In this section i am creating the week_ref DATEDIFF(DAY, 0, Start_Week) / 7 AS week_ref -- create the unique week reference number I then need to combine it with this select code: DECLARE @YearString char(3) = CONVERT(char(3), SUBSTRING(CONVERT(char(5), @week_ref), 1, 3)) DECLARE @MonthString char(2) = CONVERT(char(2), SUBSTRING(CONVERT(char(5), @week_ref), 4, 2)) --Convert: DECLARE @Year int = CONVERT(int, @YearString) + 1200 DECLARE @Month int = CONVERT(int, @MonthString) **--THIS FILTERS THE REPORT** SELECT ts.staff_member_ref, sm.common_name, sm.department_name, DATENAME(MONTH, ts.start_dtm) + ' ' + DATENAME(YEAR, ts.start_dtm) AS month_name, ts.timesheet_cat_ref, cat.desc_long AS timesheet_cat_desc, grps.grouping_ref, grps.description AS grouping_desc, ts.task_ref, tsks.task_code, tsks.description AS task_desc, ts.site_ref, sits.description AS site_desc, ts.site_ref AS Expr1, CASE WHEN ts .status = 0 THEN 'Pending' WHEN ts .status = 1 THEN 'Booked' WHEN ts .status = 2 THEN 'Approved' ELSE 'Invalid Status' END AS site_status, ts.booked_time AS booked_time_sum, start_dtm, CONVERT(varchar(20), start_dtm, 108) + ' ' + CONVERT(varchar(20), start_dtm, 103) AS start_dtm_text, booked_time, end_dtm, CONVERT(varchar(20), end_dtm, 108) + ' ' + CONVERT(varchar(20), end_dtm, 103) AS end_dtm_text FROM timesheets AS ts INNER JOIN timesheet_categories AS cat ON 
ts.timesheet_cat_ref = cat.timesheet_cat_ref INNER JOIN timesheet_tasks AS tsks ON ts.task_ref = tsks.task_ref INNER JOIN timesheet_task_groupings AS grps ON tsks.grouping_ref = grps.grouping_ref INNER JOIN timesheet_sites AS sits ON ts.site_ref = sits.site_ref INNER JOIN vw_staff_members AS sm ON ts.staff_member_ref = sm.staff_member_ref WHERE (ts.status IN (1, 2)) AND (cat.is_leave_category = 0) GROUP BY ts.staff_member_ref, sm.common_name, sm.department_name, DATENAME(MONTH, ts.start_dtm), DATENAME(YEAR, ts.start_dtm), ts.timesheet_cat_ref, cat.desc_long, grps.grouping_ref, grps.description, ts.status, ts.booked_time, ts.task_ref, tsks.task_code, tsks.description, ts.site_ref, sits.description, ts.start_dtm, ts.end_dtm ORDER BY sm.common_name, timesheet_cat_desc, tsks.task_code, site_desc DROP TABLE #WeeklyList GO I want to pass the week_ref into the SELECT statement (refer to comment - THIS FILTERS THE REPORT) but the problem is week_ref isnt a valid column as its derived by code. Any ideas?
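
    A sketch of one way to wire the derived week number into the detail query (treating the names from the procedure as given): apply the same DATEDIFF expression to the timesheet date and filter on it. Day zero (1 January 1900) is a Monday, so DATEDIFF(DAY, 0, x) / 7 buckets dates into the same Monday-to-Sunday weeks the temp table is built from.
      -- Hedged sketch: filter the timesheet rows by the computed week number.
      SELECT ts.*
      FROM   timesheets AS ts
      WHERE  ts.status IN (1, 2)
        AND  DATEDIFF(DAY, 0, ts.start_dtm) / 7 = @week_ref;   -- same formula as the week list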

    Read the article

  • Problem converting MsSql to MySql Stored procedure

    - by karthik
    Original source of MsSql SP is here.. http://www.codeproject.com/KB/database/InsertGeneratorPack.aspx I am using the below MySql stored procedure, created by SQLWAYS [Tool to convert MsSql to MySql]. The purpose of this is to take backup of selected tables to a script file. when the SP returns a value {Insert statements}. When i Execute the Below SP, i am getting a weird Result Set : SQLWAYS_EVAL# ll(cast(UidSQLWAYS_EVAL# 0)),'0')+''','+SQLWAYS_EVAL# ll(UserNameSQLWAYS_EVAL# '+SQLWAYS_EVAL# ll(PasswordSQLWAYS_EVAL# '+ I see a lot of "SQLWAYS_EVAL#" in the code, which is produced in the result too. What values need to be passed instead of "SQLWAYS_EVAL#". So that i get the proper Insert statements for each record in the table. I am new to MySql. Please help me. Its Urgent. Thanks. DELIMITER $$ DROP PROCEDURE IF EXISTS `InsertGenerator` $$ CREATE DEFINER=`root`@`localhost` PROCEDURE `InsertGenerator`() SWL_return: BEGIN -- SQLWAYS_EVAL# to retrieve column specific information -- SQLWAYS_EVAL# table DECLARE v_string VARCHAR(3000); -- SQLWAYS_EVAL# first half -- SQLWAYS_EVAL# tement DECLARE v_stringData VARCHAR(3000); -- SQLWAYS_EVAL# data -- SQLWAYS_EVAL# statement DECLARE v_dataType VARCHAR(1000); -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# columns DECLARE v_colName VARCHAR(50); DECLARE NO_DATA INT DEFAULT 0; DECLARE cursCol CURSOR FOR SELECT column_name,data_type FROM information_schema.`columns` -- WHERE table_name = v_tableName; WHERE table_name = 'tbl_users'; DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN SET NO_DATA = -2; END; DECLARE CONTINUE HANDLER FOR NOT FOUND SET NO_DATA = -1; OPEN cursCol; SET v_string = CONCAT('INSERT ',v_tableName,'('); SET v_stringData = ''; SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; IF NO_DATA <> 0 then -- NOT SUPPORTED print CONCAT('Table ',@tableName, ' not found, processing skipped.') close cursCol; LEAVE SWL_return; end if; WHILE NO_DATA = 0 DO IF v_dataType in('varchar','char','nchar','nvarchar') then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(',v_colName,'SQLWAYS_EVAL# ''+'); ELSE if v_dataType in('text','ntext') then -- SQLWAYS_EVAL# -- SQLWAYS_EVAL# else SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 00)),'''')+'''''',''+'); ELSE IF v_dataType = 'money' then -- SQLWAYS_EVAL# doesn't get converted -- SQLWAYS_EVAL# implicitly SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# y,''''''+ isnull(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0.0000'')+''''''),''+'); ELSE IF v_dataType = 'datetime' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# time,''''''+ isnull(cast(',v_colName, 'SQLWAYS_EVAL# 0)),''0'')+''''''),''+'); ELSE IF v_dataType = 'image' then SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(convert(varbinary,',v_colName, 'SQLWAYS_EVAL# 6)),''0'')+'''''',''+'); ELSE SET v_stringData = CONCAT(v_stringData,'SQLWAYS_EVAL# ll(cast(',v_colName,'SQLWAYS_EVAL# 0)),''0'')+'''''',''+'); end if; end if; end if; end if; end if; SET v_string = CONCAT(v_string,v_colName,','); SET NO_DATA = 0; FETCH cursCol INTO v_colName,v_dataType; END WHILE; select v_stringData; END $$ DELIMITER ;
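
    For orientation, a hedged note with a hypothetical v_colValue variable: SQLWAYS_EVAL# looks like the converter's placeholder for expressions it could not translate, mostly the T-SQL string concatenation with +, ISNULL() and the doubled-up quoting. The usual MySQL equivalents are CONCAT(), IFNULL() and QUOTE().
      -- Hedged sketch: typical MySQL replacements for the untranslated T-SQL pieces.
      -- v_colValue is a hypothetical variable standing in for a fetched column value.
      SET v_stringData = CONCAT(v_stringData, QUOTE(IFNULL(v_colValue, '')), ',');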

    Read the article

  • SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers

    - by pinaldave
    There are plenty of the things life one can make it simple. I really believe in the same. I was yesterday traveling for community related activity. On airport while returning I met a SQL Enthusiast. He asked me if there is any simple way to find maximum between two numbers in the SQL Server. I asked him back that what he really mean by Simple Way and requested him to demonstrate his code for finding maximum between two numbers. Here is his code: DECLARE @Value1 DECIMAL(5,2) = 9.22 DECLARE @Value2 DECIMAL(5,2) = 8.34 SELECT (0.5 * ((@Value1 + @Value2) + ABS(@Value1 - @Value2))) AS MaxColumn GO I thought his logic was accurate but the same script can be written another way. I quickly wrote following code for him and which worked just fine for him. Here is my code: DECLARE @Value1 DECIMAL(5,2) = 9.22 DECLARE @Value2 DECIMAL(5,2) = 8.34 SELECT CASE WHEN @Value1 > @Value2 THEN @Value1 ELSE @Value2 END AS MaxColumn GO He agreed that my code is much simpler but as per him there is some problem with my code which apparently he does not remember at this time. There are cases when his code will give accurate values and my code will not. I think his comment has value but both of us for the moment could not come up with any valid reason. Do you think any scenario where his code will work and my suggested code will not work? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
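
    One scenario where the two versions disagree, shown in a short demo below: a NULL input. The arithmetic form propagates the NULL, while the CASE form falls through to ELSE and returns the other value.
      -- Hedged demo: NULL input makes the two expressions return different results.
      DECLARE @Value1 DECIMAL(5,2) = NULL
      DECLARE @Value2 DECIMAL(5,2) = 8.34
      SELECT (0.5 * ((@Value1 + @Value2) + ABS(@Value1 - @Value2))) AS ArithmeticMax, -- NULL
             CASE WHEN @Value1 > @Value2 THEN @Value1 ELSE @Value2 END AS CaseMax     -- 8.34
      GO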

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are use or unused, complete or missing some columns is irrelevant, this is simply the physical stats of all indexes; disabled indexes are ignored however. In it’s simplest form this dmv can be executed as:   The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, ‘SAMPLED’, ‘LIMITED’ or ‘DETAILED’. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets; DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and the NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: Running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in so be careful running this on big databases unless you have tried it on a test server first. Let’s look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID (‘AnyDatabaseName’). The first columns we get back are database_id and object_id. 
These are pretty explanatory and we can wrap those in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips  gives us   SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id]     These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a  function in here to make it easier to work with a specific table. eg. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID(‘AdventureWorks2008.Person.Address’) , 1, NULL, NULL) AS ddips   Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N’AdventureWorks_2008′); SET @object_id = OBJECT_ID(N’AdventureWorks_2008.Person.Address’); IF @db_id IS NULL BEGINPRINT N’Invalid database’; ENDELSE IF @object_id IS NULL BEGINPRINT N’Invalid object’; ENDELSE BEGINSELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL , ‘LIMITED’); END; GO In cases where the results of querying this dmv don’t have any effect on other processes (i.e. simply viewing the results in the SSMS results area)  then it will be noticed when the results are not consistent with the expected results and in the case of this blog this is the method I have used. So, now we can relate the values in these columns to something that we recognise in the database lets see what those other values in the dmv are all about. The next columns are: We’ll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level  as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode. fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode. avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode. page_count. Total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size-in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). 
This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having it’s data in 3 pieces might not be too big an issue as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index taking into account the multiple files, type of allocation unit and the previously mentioned characteristics if index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep it in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind DBAs need to use an ‘intelligent’ process that assesses key characteristics of an index and decides on the best, if any, defragmentation method to apply should be used. There is a simple example of this in the sample code found in the Books OnLine content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let’s get back on track then. Querying the dmv with the fifth parameter value as ‘DETAILED’ takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look a we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (Page_Count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be. A detail of the smallest and largest records in the index. 
These are offered purely as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently the DBA should potentially consider: adjusting the fill_factor of the index in order to leave space at the leaf level, so that new records can be inserted without causing fragmentation so rapidly; and analysing the columns used in the index, so that new records are always added to the end of the index rather than needing to be inserted in the middle. * – it's approximate as there are many factors, associated with things like the type of data and other database settings, that affect this slightly.  Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
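Picking up the earlier point about an 'intelligent' defragmentation process, here is a minimal sketch of a threshold-based check. The database, table and index names are assumptions (the AdventureWorks sample, Person.Address and its clustered primary key PK_Address_AddressID), and the 5% / 30% boundaries are only the commonly quoted Books Online guidance, not a rule; a real process should use thresholds that suit its own workload.

-- Sketch only: names assume the AdventureWorks sample; thresholds are illustrative.
DECLARE @object_id INT = OBJECT_ID(N'Person.Address');
DECLARE @frag FLOAT;
IF @object_id IS NULL
    PRINT N'Invalid object - nothing to do';
ELSE
BEGIN
    -- index_id 1 is the clustered index; LIMITED mode is enough for this column.
    SELECT @frag = [avg_fragmentation_in_percent]
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), @object_id, 1, NULL, 'LIMITED');
    IF @frag IS NULL OR @frag < 5.0
        PRINT N'Fragmentation is low - leave the index alone';
    ELSE IF @frag < 30.0
        ALTER INDEX [PK_Address_AddressID] ON [Person].[Address] REORGANIZE;
    ELSE
        ALTER INDEX [PK_Address_AddressID] ON [Person].[Address] REBUILD;
END;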

    Read the article

  • Field symbol and Data reference in SAP-ABAP

    - by PDP-21
    If we compare field symbols and data references with pointers in C, I concluded the following. In C, say we declare a variable "var" of type "Integer" with default value "5". The variable "var" will be stored somewhere in memory, and say the memory address which holds this variable is "1000". Now we define a pointer "ptr" and assign it to our variable. So "ptr" will hold "1000" and "*ptr" will be 5. Let's compare the above situation in SAP ABAP. Here we declare a field symbol "FS" and assign it to the variable "var". Now my question is: what does "FS" hold? I have searched this rigorously on the internet but found that many ABAP consultants have the opinion that FS holds the address of the variable, i.e. 1000. But that is wrong. While debugging I found that FS holds only 5. So FS (in ABAP) is equivalent to *ptr (in C). Please correct me if my understanding is wrong. Now let's declare a data reference "dref" and another field symbol "fsym", and after creating the data reference we assign it to the field symbol. Now we can do operations on this field symbol. So the difference between a data reference and a field symbol is: in the case of a field symbol, first we declare a variable and assign it to a field symbol; in the case of a data reference, first we create a data reference and then assign that to a field symbol. Then what is the use of a data reference? The same functionality can be achieved through a field symbol as well.

    Read the article

  • T-SQL Tuesday #31 - Logging Tricks with CONTEXT_INFO

    - by Most Valuable Yak (Rob Volk)
    This month's T-SQL Tuesday is being hosted by Aaron Nelson [b | t], fellow Atlantan (the city in Georgia, not the famous sunken city, or the resort in the Bahamas) and covers the topic of logging (the recording of information, not the harvesting of trees) and maintains the fine T-SQL Tuesday tradition begun by Adam Machanic [b | t] (the SQL Server guru, not the guy who fixes cars, check the spelling again, there will be a quiz later). This is a trick I learned from Fernando Guerrero [b | t] waaaaaay back during the PASS Summit 2004 in sunny, hurricane-infested Orlando, during his session on Secret SQL Server (not sure if that's the correct title, and I haven't used parentheses in this paragraph yet).  CONTEXT_INFO is a neat little feature that's existed since SQL Server 2000 and perhaps even earlier.  It lets you assign data to the current session/connection, and maintains that data until you disconnect or change it.  In addition to the CONTEXT_INFO() function, you can also query the context_info column in sys.dm_exec_sessions, or even sysprocesses if you're still running SQL Server 2000, if you need to see it for another session. While you're limited to 128 bytes, one big advantage that CONTEXT_INFO has is that it's independent of any transactions.  If you've ever logged to a table in a transaction and then lost messages when it rolled back, you can understand how aggravating it can be.  CONTEXT_INFO also survives across multiple SQL batches (GO separators) in the same connection, so for those of you who were going to suggest "just log to a table variable, they don't get rolled back":  HA-HA, I GOT YOU!  Since GO starts a new batch all variable declarations are lost. Here's a simple example I recently used at work.  I had to test database mirroring configurations for disaster recovery scenarios and measure the network throughput.  I also needed to log how long it took for the script to run and include the mirror settings for the database in question.  I decided to use AdventureWorks as my database model, and Adam Machanic's Big Adventure script to provide a fairly large workload that's repeatable and easily scalable.  My test would consist of several copies of AdventureWorks running the Big Adventure script while I mirrored the databases (or not). Since Adam's script contains several batches, I decided CONTEXT_INFO would have to be used.  As it turns out, I only needed to grab the start time at the beginning, I could get the rest of the data at the end of the process.   The code is pretty small: declare @time binary(128)=cast(getdate() as binary(8)) set context_info @time   ... rest of Big Adventure code ...   go use master; insert mirror_test(server,role,partner,db,state,safety,start,duration) select @@servername, mirroring_role_desc, mirroring_partner_instance, db_name(database_id), mirroring_state_desc, mirroring_safety_level_desc, cast(cast(context_info() as binary(8)) as datetime), datediff(s,cast(cast(context_info() as binary(8)) as datetime),getdate()) from sys.database_mirroring where db_name(database_id) like 'Adv%';   I declared @time as a binary(128) since CONTEXT_INFO is defined that way.  I couldn't convert GETDATE() to binary(128) as it would pad the first 120 bytes as 0x00.  To keep the CAST functions simple and avoid using SUBSTRING, I decided to CAST GETDATE() as binary(8) and let SQL Server do the implicit conversion.  It's not the safest way perhaps, but it works on my machine. 
:) As I mentioned earlier, you can query system views for sessions and get their CONTEXT_INFO.  With a little boilerplate code this can be used to monitor long-running procedures, in case you need to kill a process, or are just curious  how long certain parts take.  In this example, I added code to Adam's Big Adventure script to set CONTEXT_INFO messages at strategic places I want to monitor.  (His code is in UPPERCASE as it was in the original, mine is all lowercase): declare @msg binary(128) set @msg=cast('Altering bigProduct.ProductID' as binary(128)) set context_info @msg go ALTER TABLE bigProduct ALTER COLUMN ProductID INT NOT NULL GO set context_info 0x0 go declare @msg1 binary(128) set @msg1=cast('Adding pk_bigProduct Constraint' as binary(128)) set context_info @msg1 go ALTER TABLE bigProduct ADD CONSTRAINT pk_bigProduct PRIMARY KEY (ProductID) GO set context_info 0x0 go declare @msg2 binary(128) set @msg2=cast('Altering bigTransactionHistory.TransactionID' as binary(128)) set context_info @msg2 go ALTER TABLE bigTransactionHistory ALTER COLUMN TransactionID INT NOT NULL GO set context_info 0x0 go declare @msg3 binary(128) set @msg3=cast('Adding pk_bigTransactionHistory Constraint' as binary(128)) set context_info @msg3 go ALTER TABLE bigTransactionHistory ADD CONSTRAINT pk_bigTransactionHistory PRIMARY KEY NONCLUSTERED(TransactionID) GO set context_info 0x0 go declare @msg4 binary(128) set @msg4=cast('Creating IX_ProductId_TransactionDate Index' as binary(128)) set context_info @msg4 go CREATE NONCLUSTERED INDEX IX_ProductId_TransactionDate ON bigTransactionHistory(ProductId,TransactionDate) INCLUDE(Quantity,ActualCost) GO set context_info 0x0   This doesn't include the entire script, only those portions that altered a table or created an index.  One annoyance is that SET CONTEXT_INFO requires a literal or variable, you can't use an expression.  And since GO starts a new batch I need to declare a variable in each one.  And of course I have to use CAST because it won't implicitly convert varchar to binary.  And even though context_info is a nullable column, you can't SET CONTEXT_INFO NULL, so I have to use SET CONTEXT_INFO 0x0 to clear the message after the statement completes.  And if you're thinking of turning this into a UDF, you can't, although a stored procedure would work. So what does all this aggravation get you?  As the code runs, if I want to see which stage the session is at, I can run the following (assuming SPID 51 is the one I want): select CAST(context_info as varchar(128)) from sys.dm_exec_sessions where session_id=51   Since SQL Server 2005 introduced the new system and dynamic management views (DMVs) there's not as much need for tagging a session with these kinds of messages.  You can get the session start time and currently executing statement from them, and neatly presented if you use Adam's sp_whoisactive utility (and you absolutely should be using it).  Of course you can always use xp_cmdshell, a CLR function, or some other tricks to log information outside of a SQL transaction.  All the same, I've used this trick to monitor long-running reports at a previous job, and I still think CONTEXT_INFO is a great feature, especially if you're still using SQL Server 2000 or want to supplement your instrumentation.  If you'd like an exercise, consider adding the system time to the messages in the last example, and an automated job to query and parse it from the system tables.  That would let you track how long each statement ran without having to run Profiler. #TSQL2sDay
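As a rough sketch of that closing exercise (the message layout, the 23-character timestamp prefix and the column aliases here are my own invention, not part of the original scripts), the time can be embedded in the tag and parsed back out of sys.dm_exec_sessions like this:

-- In the session being monitored: prefix each message with the current time
-- (style 121 for datetime is exactly 23 characters: yyyy-mm-dd hh:mi:ss.mmm).
declare @msg binary(128)
set @msg=cast(convert(char(23), getdate(), 121) + ' Altering bigProduct.ProductID' as binary(128))
set context_info @msg
go
-- ... the statement being timed would run here ...
go
-- From a monitoring session: split the tag back into its time and text parts.
select s.session_id,
       cast(substring(s.context_info,1,23) as char(23))       as stage_started,
       cast(substring(s.context_info,25,104) as varchar(104)) as stage_name
from sys.dm_exec_sessions as s
where s.context_info <> 0x0;

Comparing stage_started against the current time gives the elapsed time for the stage, and a scheduled job could log those rows to a table at whatever interval suits you.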

    Read the article


< Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >