Search Results

Search found 27403 results on 1097 pages for 'nova sql'.


  • Using a SQL DB column as a lock for concurrent operations in Entity Framework

    - by Sid
     We have a long-running user operation that is handled by a pool of worker processes. Data input and output is from Azure SQL. The master Azure SQL table's columns are approximately [UserId, col1, col2, ..., colN, beingProcessed, lastTimeProcessed], where beingProcessed is a boolean and lastTimeProcessed is a DateTime. The logic in every worker role is:

       public void WorkerRoleMain()
       {
           while (true)
           {
               try
               {
                   dbContext db = new dbContext();
                   // Read
                   foreach (UserProfile user in db.UserProfile
                       .Where(u => DateTime.UtcNow.Subtract(u.lastTimeProcessed) > TimeSpan.FromHours(24)
                                   & u.beingProcessed == false))
                   {
                       user.beingProcessed = true;   // Modify
                       db.SaveChanges();             // Write
                       // Do some long drawn-out processing here
                       ...
                       user.lastTimeProcessed = DateTime.UtcNow;
                       user.beingProcessed = false;
                       db.SaveChanges();
                   }
               }
               catch (Exception ex)
               {
                   LogException(ex);
                   Sleep(TimeSpan.FromMinutes(5));
               }
           } // while
       }

     With multiple workers processing as above (each with its own Entity Framework layer), beingProcessed is in essence being used as a lock for mutual exclusion. Question: how can I deal with concurrency issues on the beingProcessed "lock" itself under the above load? I think the read-modify-write operation on beingProcessed needs to be atomic, but I'm open to other strategies. Open to other code refinements too.
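
     Since other strategies are welcome: one way to make the read-modify-write atomic is to claim a row with a single UPDATE that both flips beingProcessed and returns the claimed key, so no two workers can grab the same row. A minimal T-SQL sketch, assuming the table is dbo.UserProfile (the UPDLOCK/READPAST hints and the TOP (1) batch size are assumptions, not from the question):

       -- Atomically claim one eligible row and return its key
       UPDATE TOP (1) dbo.UserProfile WITH (UPDLOCK, READPAST)
       SET    beingProcessed = 1
       OUTPUT inserted.UserId
       WHERE  beingProcessed = 0
         AND  lastTimeProcessed < DATEADD(HOUR, -24, GETUTCDATE());

     Each worker would then process only the UserId values returned to it and clear the flag afterwards; Entity Framework can run a statement like this through a mapped stored procedure or a raw SQL call.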

    Read the article

  • Group / User based security. Table / SQL question

    - by Brett
     Hi, I'm setting up a group/user based security system. I have 4 tables as follows: user, groups, group_user_mappings, and acl, where acl is the mapping between an item_id and either a group or a user. In the acl table I have 3 columns of note (there is actually a 4th, an auto-id, but that is irrelevant): col 1 is item_id (the item to access), col 2 is user_id (a user that is allowed to access), and col 3 is group_id (a group that is allowed to access). So for example:

       item1, peter,
       item2,      , group1
       item3, jane,

     Either way, the acl gives access to a user or a group: any one row in the acl table will have either an item-user mapping or an item-group mapping. If I want a query that returns all objects a user has access to, I think I need a SQL query with a UNION, because I need 2 separate queries that join like item - acl - group - user AND item - acl - user. This I guess will work OK. Is this how it's normally done? Am I doing this the right way? It seems a little messy. I was thinking I could get around it by creating a single-user group for each person, so I only ever deal with groups in my SQL, but this seems a little messy as well.
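
     For reference, a hedged sketch of the UNION-based query described above, using the table names from the question (the @user_id parameter and the exact column names are assumptions):

       -- Items granted directly to the user
       SELECT a.item_id
       FROM   acl AS a
       WHERE  a.user_id = @user_id
       UNION
       -- Items granted to any group the user belongs to
       SELECT a.item_id
       FROM   acl AS a
       JOIN   group_user_mappings AS m ON m.group_id = a.group_id
       WHERE  m.user_id = @user_id;

     This is a common way to model it; the single-user-group alternative mentioned above simply trades the UNION for extra rows in group_user_mappings.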

    Read the article

  • SQL Server: preventing dirty reads in a stored procedure

    - by pcampbell
     Consider a SQL Server database and its two stored procs:

     1. A proc that performs 3 important things in a transaction: create a customer, call a sproc to perform another insert, and conditionally insert a third record with the new identity.

       BEGIN TRAN
           INSERT INTO Customer(CustName) VALUES (@CustomerName)
           SELECT @NewID = SCOPE_IDENTITY()
           EXEC CreateNewCustomerAccount @NewID, @CustomerPhoneNumber
           IF @InvoiceTotal > 100000
               INSERT INTO PreferredCust(InvoiceTotal, CustID) VALUES (@InvoiceTotal, @NewID)
       COMMIT TRAN

     2. A stored proc which polls the Customer table for new entries that don't have a related PreferredCust entry. The client app performs the polling by calling this stored proc every 500 ms. A problem has arisen where the polling stored procedure found an entry in the Customer table and returned it as part of its results. The problem is that it picked up that record, I am assuming, as part of a dirty read. The record ended up having an entry in PreferredCust later, and ended up creating a problem downstream. Question: how can you explicitly prevent dirty reads by that second stored proc? The environment is SQL Server 2005 with the default out-of-the-box configuration. No other locking hints are given in either of these stored procedures.
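
     A hedged sketch of one option for the polling procedure: state the isolation level explicitly inside the proc, so a READ UNCOMMITTED setting inherited from the calling connection can never let an uncommitted Customer row through (the procedure name, the CustID/CustName columns and the NOT EXISTS test are assumptions about the schema, not from the question):

       CREATE PROCEDURE dbo.GetCustomersAwaitingPreferredCheck
       AS
       BEGIN
           -- Override whatever isolation level the caller's session is using
           SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- or SNAPSHOT, if enabled on the database
           SELECT c.CustID, c.CustName
           FROM   dbo.Customer AS c
           WHERE  NOT EXISTS (SELECT 1 FROM dbo.PreferredCust AS p WHERE p.CustID = c.CustID);
       END;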

    Read the article

  • Update or Insert Row depending on whether the row is present in Microsoft SQL Server 2005

    - by Srikanth
     Hi, I am passing an XML document as input to a stored procedure in Microsoft SQL Server 2005. This is a sample of the XML being passed in:

       <Strategy StrategyID="0" TOStrategyID="8" ShutdownQtySell="1" ShutdownQtyBuy="1">
         <ParameterRange ParameterSetID="6" ParameterRangeID="1" ParameterRangeFrom="0" ParameterRangeTo="20" ParameterAutoTakeOut="False"></ParameterRange>
         <ParameterRange ParameterSetID="6" ParameterRangeID="4" ParameterRangeFrom="21" ParameterRangeTo="40" ParameterAutoTakeOut="False"></ParameterRange>
         <ParameterRange ParameterSetID="6" ParameterRangeID="5" ParameterRangeFrom="41" ParameterRangeTo="60" ParameterAutoTakeOut="False"></ParameterRange>
         <ParameterRange ParameterSetID="6" ParameterRangeID="6" ParameterRangeFrom="61" ParameterRangeTo="80" ParameterAutoTakeOut="False"></ParameterRange>
         <ParameterRange ParameterSetID="6" ParameterRangeID="7" ParameterRangeFrom="81" ParameterRangeTo="100" ParameterAutoTakeOut="False"></ParameterRange>
       </Strategy>

     I am able to retrieve the data using the OPENXML functionality in SQL Server. I am using this to get the data corresponding to the ParameterRange rows:

       SELECT ParameterRangeID as iRangeID, ParameterSetID as iSetID,
              ParameterRangeFrom as fRangeFrom, ParameterRangeTo as fRangeTo,
              ParameterAutoTakeOut as bTakeoutEnabled
       FROM OPENXML(@idoc, '/Strategy/ParameterRange', 1)
       WITH (ParameterSetID int, ParameterRangeID int, ParameterRangeFrom float,
             ParameterRangeTo float, ParameterAutoTakeOut bit)

     Now I need to insert/update these rows into a table TempRanges, which has (iRangeID, iSetID) as the primary key. If there is already a row with that primary key, I want to update it with the latest values; if there is no row with that primary key, I need to insert into the table. How can I accomplish this inside the stored procedure? Thanks, Sri
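
     SQL Server 2005 has no MERGE statement, so a common pattern is an UPDATE followed by an INSERT of the rows that did not match. A hedged sketch inside the procedure, reusing the OPENXML select above (the TempRanges column names are taken from the query's aliases):

       DECLARE @r TABLE (iRangeID int, iSetID int, fRangeFrom float, fRangeTo float, bTakeoutEnabled bit);

       -- Shred the XML once into a table variable
       INSERT @r
       SELECT ParameterRangeID, ParameterSetID, ParameterRangeFrom, ParameterRangeTo, ParameterAutoTakeOut
       FROM OPENXML(@idoc, '/Strategy/ParameterRange', 1)
       WITH (ParameterSetID int, ParameterRangeID int, ParameterRangeFrom float,
             ParameterRangeTo float, ParameterAutoTakeOut bit);

       -- Update rows that already exist
       UPDATE t
       SET    fRangeFrom = r.fRangeFrom, fRangeTo = r.fRangeTo, bTakeoutEnabled = r.bTakeoutEnabled
       FROM   TempRanges AS t
       JOIN   @r AS r ON r.iRangeID = t.iRangeID AND r.iSetID = t.iSetID;

       -- Insert rows that don't
       INSERT INTO TempRanges (iRangeID, iSetID, fRangeFrom, fRangeTo, bTakeoutEnabled)
       SELECT r.iRangeID, r.iSetID, r.fRangeFrom, r.fRangeTo, r.bTakeoutEnabled
       FROM   @r AS r
       WHERE  NOT EXISTS (SELECT 1 FROM TempRanges AS t
                          WHERE t.iRangeID = r.iRangeID AND t.iSetID = r.iSetID);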

    Read the article

  • Database table relationships: Always also relate to specified value (Linq to SQL in .NET Framework)

    - by sinni800
     I really cannot describe my question better than in the title; if anyone has suggestions, please tell! I use the LINQ to SQL framework in .NET. I ran into something which could be easily solved if the framework supported it; otherwise it means a lot of extra coding. I have an n-to-n relation with a helper table in between. The tables are: items, places, and the connection table which relates items to places and the other way around. One item can be found in many places, and one place can have many items. Now of course there will be many items which will be in ALL places. Here is the problem: places can always be added, so I need a place-ID which encompasses ALL places, always, like maybe a place-ID of "0". If the helper table has a row with a place-ID of zero, the item should be visible in all places. In SQL this would be a simple "WHERE [...] OR place-id = 0", but how do I do this with LINQ relations? Also, for a little side question: how could I manage "all but this place" kinds of exclusions?

    Read the article

  • 'LINQ query plan' horribly inefficient but 'Query Analyser query plan' is perfect for same SQL!

    - by Simon_Weaver
     I have a LINQ to SQL query that generates the following SQL:

       exec sp_executesql N'SELECT COUNT(*) AS [value]
       FROM [dbo].[SessionVisit] AS [t0]
       WHERE ([t0].[VisitedStore] = @p0) AND (NOT ([t0].[Bot] = 1)) AND ([t0].[SessionDate] > @p1)',
       N'@p0 int,@p1 datetime', @p0=1, @p1='2010-02-15 01:24:00'

     (This is the actual SQL taken from SQL Profiler on SQL Server 2008.) The query plan generated when I run this SQL from within Query Analyser is perfect: it uses an index containing VisitedStore, Bot, SessionDate, and the query returns instantly. However, when I run this from C# (with LINQ) a different query plan is used that is so inefficient it doesn't even return in 60 seconds. That query plan is trying to do a key lookup on the clustered primary key, which contains a couple million rows; it has no chance of returning. What I just can't understand is that the EXACT same SQL is being run - either from within LINQ or from within Query Analyser - yet the query plan is different. I've run the two queries many, many times and they're now running in isolation from any other queries. The date is DateTime.Now.AddDays(-7), but I've even hardcoded that date to eliminate caching problems. Is there anything I can change in LINQ to SQL to affect the query plan, or any way to debug this further? I'm very, very confused!
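
     A hedged diagnostic sketch: a usual cause of "same text, different plan" is that the query window and ADO.NET connect with different SET options (commonly ARITHABORT), so each gets its own cached plan, and the ADO.NET copy was compiled for an unrepresentative sniffed parameter value. You can check whether multiple cache entries exist like this:

       SELECT cp.plan_handle, cp.usecounts, pa.value AS set_options
       FROM   sys.dm_exec_cached_plans AS cp
       CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
       CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
       WHERE  st.text LIKE '%SessionVisit%'
         AND  pa.attribute = 'set_options';

       -- If two different set_options values appear, try SET ARITHABORT OFF in a query
       -- window and re-run the sp_executesql batch above to reproduce the slow plan.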

    Read the article

  • SQL: select rows with the same order as IN clause

    - by Andrea3000
     I know that this question has been asked several times and I've read all the answers, but none of them seem to completely solve my problem. I'm switching from a MySQL database to an MS Access database with MS SQL. In both cases I use a PHP script to connect to the database and perform SQL queries. I need to find a suitable replacement for a query I used to perform on MySQL. I want to:

     1. perform a first query and order records alphabetically based on one of the columns;
     2. construct a list of IDs which reflects the previous alphabetical order;
     3. perform a second query with the IN clause applied to the IDs' list and ordered by this list.

     In MySQL I used to perform the last query this way:

       SELECT name FROM users WHERE id IN ($name_ids) ORDER BY FIND_IN_SET(id, '$name_ids')

     Since FIND_IN_SET is available only in MySQL, and CHARINDEX and PATINDEX are not available from my PHP script, how can I achieve this? I know that I could write something like:

       SELECT name FROM users WHERE id IN ($name_ids)
       ORDER BY CASE id
           WHEN ... THEN 1
           WHEN ... THEN 2
           WHEN ... THEN 3
           WHEN ... THEN 4
       END

     but you have to consider that the IDs' list has a variable length and variable elements, because it depends on the first query, and that the list can easily contain thousands of elements. Have you got any hints on this? Is there a way to programmatically construct the ORDER BY CASE ... WHEN ... statement? Is there a better approach, since my list of IDs can be big?
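
     One hedged alternative to building a giant CASE in PHP is to build a derived table of (id, position) pairs and join to it; the ordering then scales to thousands of IDs without any special function (T-SQL syntax; the literal IDs below are placeholders):

       SELECT u.name
       FROM   users AS u
       JOIN  (SELECT 17 AS id, 1 AS pos
              UNION ALL SELECT 42, 2
              UNION ALL SELECT 8,  3) AS ord ON ord.id = u.id
       ORDER BY ord.pos;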

    Read the article

  • Generated SQL from LINQ to SQL

    - by Muhammad Kashif Nadeem
     The following code:

       ProductPricesDataContext db = new ProductPricesDataContext();
       var products = from p in db.Products
                      where p.ProductFields.Count > 3
                      select new
                      {
                          ProductIDD = p.ProductId,
                          ProductName = p.ProductName.Contains("hotel"),
                          NumbeOfProd = p.ProductFields.Count,
                          totalFields = p.ProductFields.Sum(o => o.FieldId + o.FieldId)
                      };

     generated the following SQL:

       SELECT [t0].[ProductId] AS [ProductIDD],
              (CASE WHEN [t0].[ProductName] LIKE '%hotel%' THEN 1
                    WHEN NOT ([t0].[ProductName] LIKE '%hotel%') THEN 0
                    ELSE NULL END) AS [ProductName],
              (SELECT COUNT(*) FROM [dbo].[ProductField] AS [t2]
               WHERE [t2].[ProductId] = [t0].[ProductId]) AS [NumbeOfProd],
              (SELECT SUM([t3].[FieldId] + [t3].[FieldId]) FROM [dbo].[ProductField] AS [t3]
               WHERE [t3].[ProductId] = [t0].[ProductId]) AS [totalFields]
       FROM [dbo].[Product] AS [t0]
       WHERE ((SELECT COUNT(*) FROM [dbo].[ProductField] AS [t1]
               WHERE [t1].[ProductId] = [t0].[ProductId])) > 3

     Why is there a CASE statement for ProductName? Because of it, instead of ProductName I am just getting 0 in my result set. It should generate SQL like the following (where ProductName is filtered with LIKE '%hotel%'):

       SELECT [t0].[ProductId] AS [ProductIDD],
              [ProductName],
              (SELECT COUNT(*) FROM [dbo].[ProductField] AS [t2]
               WHERE [t2].[ProductId] = [t0].[ProductId]) AS [NumbeOfProd],
              (SELECT SUM([t3].[FieldId] + [t3].[FieldId]) FROM [dbo].[ProductField] AS [t3]
               WHERE [t3].[ProductId] = [t0].[ProductId]) AS [totalFields]
       FROM [dbo].[Product] AS [t0]
       WHERE ((SELECT COUNT(*) FROM [dbo].[ProductField] AS [t1]
               WHERE [t1].[ProductId] = [t0].[ProductId])) > 3
         AND t0.ProductName LIKE '%hotel%'

     Thanks.

    Read the article

  • Interesting LinqToSql behaviour

    - by Ben Robinson
     We have a database table that stores the location of some wave files plus related metadata. There is a foreign key (employeeid) on the table that links to an employee table. However, not all wav files relate to an employee; for these records employeeid is null. We are using LinqToSql to access the database, and the query to pull out all non-employee-related wav file records is as follows:

       var results = from Wavs in db.WaveFiles
                     where Wavs.employeeid == null
                     select Wavs;

     Except this returns no records, despite the fact that there are records where employeeid is null. On profiling SQL Server I discovered the reason no records are returned is because LinqToSql is turning it into SQL that looks very much like:

       SELECT Field1, Field2 //etc
       FROM WaveFiles
       WHERE 1=0

     Obviously this returns no rows. However, if I go into the DBML designer, remove the association and save, all of a sudden the exact same LINQ query turns into:

       SELECT Field1, Field2 //etc
       FROM WaveFiles
       WHERE EmployeeID IS NULL

     I.e. if there is an association then LinqToSql assumes that all records have a value for the foreign key (even though it is nullable and the property appears as a nullable int on the WaveFile entity) and as such deliberately constructs a WHERE clause that will return no records. Does anyone know if there is a way to keep the association in LinqToSql but stop this behaviour? A workaround I can think of quickly is to have a calculated field called IsSystemFile and set it to 1 if employeeid is null and 0 otherwise. However, this seems like a bit of a hack to work around strange behaviour of LinqToSql, and I would rather do something in the DBML file or define something on the foreign key constraint that will prevent this behaviour.

    Read the article

  • Complex SQL design, help/advice needed

    - by eugeneK
     Hi, I have a few questions for the SQL gurus in here. Briefly, this is an ads management system where a user can define campaigns for different countries, categories and languages. I have a few questions in mind, so help me with what you can. Generally I'm using ASP.NET and I want to cache the whole result set of a certain user once he asks for statistics for the first time; this way I will avoid large round-trips to the server. Any help is welcomed. Click here for a diagram with all the details you need for my questions.

     1. The main issue of this application is to show the user how many clicks/impressions there were and how much money he spent on a campaign. What is the easiest way to get this information for him? I will also include filtering by date, date ranges and a few other params in this statistics table.

     2. The other issue is what happens when the user tries to edit a campaign. The old campaign will die; this means that if the user set $0.01 as the campaignPPU (pay-per-unit) and the next day updates it to $0.05, everything will be reset to $0.05.

     3. If you could re-design some parts of the table design so it would be more flexible and easier to modify, how would you do it?

     Thanks... sorry for such a large job, but it may interest some SQL guys in here.

    Read the article

  • PHP -- automatic SQL injection protection?

    - by ashgromnies
    I took over maintenance of a PHP app recently and I'm not super familiar with PHP but some of the things I've been seeing on the site are making me nervous that it could be vulnerable to a SQL injection attack. For example, see how this code for logging into the administrative section works: $password = md5(HASH_SALT . $_POST['loginPass']); $query = "SELECT * FROM `administrators` WHERE `active`='1' AND `email`='{$_POST['loginEmail']}' AND `password`='{$password}'"; $userInfo = db_fetch_array(db_query($query)); if($userInfo['id']) { $_SESSION['adminLoggedIn'] = true; // user is logged in, other junk happens here, not important The creators of the site made a special db_query method and db_fetch_array method, shown here: function db_query($qstring,$print=0) { return @mysql(DB_NAME,$qstring); } function db_fetch_array($qhandle) { return @mysql_fetch_array($qhandle); } Now, this makes me think I should be able to do some sort of SQL injection attack with an email address like: ' OR 'x'='x' LIMIT 1; and some random password. When I use that on the command line, I get an administrative user back, but when I try it in the application, I get an invalid username/password error, like I should. Could there be some sort of global PHP configuration they have enabled to block these attacks? Where would that be configured? Here is the PHP --version information: # php --version PHP 5.2.12 (cli) (built: Feb 28 2010 15:59:21) Copyright (c) 1997-2009 The PHP Group Zend Engine v2.2.0, Copyright (c) 1998-2009 Zend Technologies with the ionCube PHP Loader v3.3.14, Copyright (c) 2002-2010, by ionCube Ltd., and with Zend Optimizer v3.3.9, Copyright (c) 1998-2009, by Zend Technologies

    Read the article

  • ORDER BY in a SQL Server 2008 view

    - by eidylon
     Hi all... we have a view in our database which has an ORDER BY in it. Now, I realize views generally don't order, because different people may use them for different things and want them ordered differently. This view however is used for a VERY SPECIFIC use case which demands a certain order. (It is team standings for a soccer league.) The database is SQL Server 2008 Express, v.10.0.1763.0, on a Windows Server 2003 R2 box. The view is defined as such:

       CREATE VIEW season.CurrentStandingsOrdered
       AS
           SELECT TOP 100 PERCENT *, season.GetRanking(TEAMID) RANKING
           FROM season.CurrentStandings
           ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
                    GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING

     It returns: GENDER, TEAMYEAR, CODE, TEAMID, CLUB, NAME, WINS, LOSSES, TIES, GOALS_FOR, GOALS_AGAINST, DIFFERENTIAL, POINTS, FORFEITS, RANKING. Now, when I run a SELECT against the view, it orders the results by GENDER, TEAMYEAR, CODE, TEAMID. Notice that it is ordering by TEAMID instead of POINTS as the ORDER BY clause specifies. However, if I copy the SQL statement and run it exactly as is in a new query window, it orders correctly as specified by the ORDER BY clause.
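
     For what it's worth, SQL Server 2005 and later do not guarantee result order from a view definition even with TOP 100 PERCENT; the ORDER BY inside the view only determines which rows qualify for TOP. The reliable fix is to order in the query that reads the view, for example:

       SELECT *
       FROM   season.CurrentStandingsOrdered
       ORDER BY GENDER, TEAMYEAR, CODE, POINTS DESC, FORFEITS,
                GOALS_AGAINST, GOALS_FOR DESC, DIFFERENTIAL, RANKING;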

    Read the article

  • Linq to SQL - Failing to update

    - by Isaac
     Hello, I'm having some trouble updating LINQ to SQL entities. For some reason, I can update every single field of my Item entity besides Name. Here are two simple tests I wrote:

       [TestMethod]
       public void TestUpdateName()
       {
           using (var context = new SimoneDataContext())
           {
               Item item = context.Items.First();
               if (item != null)
               {
                   item.Name = "My New Name";
                   context.SubmitChanges();
               }
           }
       }

       [TestMethod]
       public void TestUpdateMPN()
       {
           using (var context = new SimoneDataContext())
           {
               Item item = context.Items.First();
               if (item != null)
               {
                   item.MPN = "My New MPN";
                   context.SubmitChanges();
               }
           }
       }

     Unfortunately, TestUpdateName() fails with the following error: System.Data.SqlClient.SqlException: Incorrect syntax near the keyword 'WHERE'. And here's the outputted SQL:

       UPDATE [dbo].[Items]
       SET
       WHERE ([Id] = @p0) AND ([CategoryId] = @p1) AND ([MPN] = @p2) AND ([Height] = @p3)
         AND ([Width] = @p4) AND ([Weight] = @p5) AND ([Length] = @p6) AND ([AdministrativeCost] = @p7)
       -- @p0: Input Int (Size = 0; Prec = 0; Scale = 0) [1]
       -- @p1: Input Int (Size = 0; Prec = 0; Scale = 0) [1]
       -- @p2: Input VarChar (Size = 10; Prec = 0; Scale = 0) [My New MPN]
       -- @p3: Input Decimal (Size = 0; Prec = 5; Scale = 3) [30.000]
       -- @p4: Input Decimal (Size = 0; Prec = 5; Scale = 3) [10.000]
       -- @p5: Input Decimal (Size = 0; Prec = 5; Scale = 3) [40.000]
       -- @p6: Input Decimal (Size = 0; Prec = 5; Scale = 3) [30.000]
       -- @p7: Input Money (Size = 0; Prec = 19; Scale = 4) [350.0000]
       -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 3.5.30729.4926

     As you can see, no update is being generated (the SET clause is empty). I have no clue why this is happening. And, already in advance: YES, the table Items has a PK (Id). Thank you in advance!

    Read the article

  • Remote connection to SQL Server Express fails

    - by worlds-apart89
     I have two computers that share the same Internet IP address. Using one of the computers, I can remotely connect to a SQL Server database on the other. Here is my connection string:

       SqlConnection connection = new SqlConnection(@"Data Source=192.168.1.101\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;");

     192.168.1.101 is the server, SQLEXPRESSNI is the SQL Server instance name, and FirstDB is the name of the database. Now, I have another computer with a different Internet IP address. I want to connect to the server above using that third computer, which does not belong to my local area network. I don't have access to the third computer at the moment, so I want to use (if possible) the client computer on the LAN again:

       SqlConnection connection = new SqlConnection(@"Data Source=SharedInternetIP\SQLEXPRESSNI,1433;Network Library=DBMSSOCN;Initial Catalog=FirstDB;Persist Security Info=True;User ID=username;Password=password;");

     This does not work. Note that I am a beginner, so I am not quite sure what I am doing even though I know what I want to do. By passing the Internet IP to the SqlConnection object rather than the local IP address, how can I successfully connect to the server computer using the client computer in the same network? Also note that my ultimate goal is to connect to the server with an external client, but I don't have access to that computer right now. I'd appreciate any help.

    Read the article

  • What's the SQL function for timestamp manipulation?

    - by Eric Leroy
     I'm new to SQL Server and the time functions are different from MySQL's, so I'm having a terrible time finding a good site reference with USEFUL timestamp queries. I'm not able to locate the correct way of doing this query in SQL:

       Id        Timestamp
       -----------------------------------
       1145744   2012-10-10 18:15:11.500
       1145743   2012-10-10 18:15:11.313
       1145742   2012-10-10 18:15:11.313
       1145741   2012-10-10 18:15:11.253
       1145740   2012-10-10 18:15:11.190
       1145739   2012-10-10 18:15:11.190
       1145738   2012-10-10 18:15:11.127
       1145737   2012-10-10 18:15:11.067
       1145736   2012-10-10 18:15:11.063
       1145735   2012-10-10 18:15:10.940
       1145734   2012-10-10 18:15:10.817

       SELECT * FROM table WHERE Timestamp ... RANGE

     I need the range of 2 timestamps so I can select rows by the following parameters: second range, minute range, hour range, day range, week range, month range, year range. Is there one function to put 2 timestamps into and get the range, or is this a mix of functions I need? Any good site references would be greatly appreciated. The MSDN site isn't helping me isolate the proper way of doing this. I've been searching for about an hour trying to get the last day, from 1:30 PM yesterday to 1:30 PM today.
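
     A hedged sketch of the usual pattern: there is no single "range" function, but DATEADD against an anchor time covers all of the listed units (the table name below is a placeholder; swap DAY for SECOND, MINUTE, HOUR, WEEK, MONTH or YEAR as needed):

       DECLARE @end datetime;
       SET @end = '2012-10-11T13:30:00';           -- "1:30 PM today" from the question

       SELECT *
       FROM   dbo.SomeTable                        -- placeholder table name
       WHERE  [Timestamp] >= DATEADD(DAY, -1, @end)
         AND  [Timestamp] <  @end;                 -- half-open range avoids boundary duplicates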

    Read the article

  • .Net SQL Parameter for String List Problem

    - by JK
     I am writing a C# program in which I send a query to SQL Server to be processed, and a dataset returns. I am using parameters to pass information to the query before it is sent to SQL Server. This works fine except in the situation below. The query looks like this:

       reportQuery = " Select * From tableName Where Account_Number in (@AccountNum) and Account_Date = @AccountDate ";

     The AccountDate parameter works fine, but not the AccountNum parameter. I need the final query to execute like this:

       Select * From tableName Where Account_Number in ('AX3456','YZYL123','ZZZ123') and Account_Date = '1-Jan-2010'

     The problem is that I have these account numbers (actually text) in a C# string list. To feed it to the parameter, I have been declaring the parameter as a string: I turn the list into one string and feed it to the parameter. I think the problem is that I am feeding the parameter this: "'AX3456','YZYL123','ZZZ123'" when it wants this: 'AX3456','YZYL123','ZZZ123'. How do I get the string list into the query using a parameter and have it execute as shown above? This is how I am declaring and assigning the parameter:

       SqlParameter AccountNumsParam = new SqlParameter();
       AccountNumsParam.ParameterName = "@AccountNums";
       AccountNumsParam.SqlDbType = SqlDbType.NVarChar;
       AccountNumsParam.Value = AccountNumsString;

     FYI, AccountNumString == "'AX3456','YZYL123','ZZZ123'"
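
     A parameter always arrives as a single value, so IN (@AccountNum) compares Account_Number against the whole comma-separated string rather than a list. One hedged approach is to split the parameter on the SQL side; dbo.SplitString here is a hypothetical table-valued helper returning one row per value in a column called value (on SQL Server 2016+ the built-in STRING_SPLIT does the same job):

       SELECT t.*
       FROM   tableName AS t
       WHERE  t.Account_Number IN (SELECT s.value FROM dbo.SplitString(@AccountNums, ',') AS s)
         AND  t.Account_Date = @AccountDate;

     With this approach the C# side would pass the plain list AX3456,YZYL123,ZZZ123 without any quoting.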

    Read the article

  • Error while closing SQL Connection

    - by Wickedman84
     I have a problem with closing the SqlConnection in my application. My application is written in VB.NET and references a class with code to open and close the database connection and to execute all SQL scripts. The error occurs when I close my application. In the FormClosing event of my main form I call a function that closes all the connections, but just before I close the connections I perform a SQL query to delete a row from a table, using the function below.

       Public Function DeleteFunction(ByVal mySQLQuery As String, ByVal cmd As SqlCommand) As Boolean
           Try
               cmd.Connection = myConnection
               cmd.CommandText = mySQLQuery
               cmd.ExecuteNonQuery()
               Return True
           Catch ex As Exception
               WriteErrorMessage("DeleteFunction", ex, Logpath, "SQL Error: " & mySQLQuery)
               Return False
           End Try
       End Function

     In my application I check the result of the Boolean. If it returns True, I call the function that closes the database connection. The returned Boolean is True and the requested row is deleted in my database, which means I can close my connection. I do that with the function below.

       Public Sub DatabaseConnClose()
           myCommand.CommandText = ""
           myConnection.Close()
           myCommand = Nothing
           myConnection = Nothing
       End Sub

     After executing this code I receive an error in my log file from the DeleteFunction. It says: "Connection property has not been initialized." It seems very strange to receive an error from a function that had already executed completely, or am I wrong to think that? Can anyone tell me why I receive this error and how I can solve the problem?

    Read the article

  • Database operations through SQL: Database Restore ...

    - by zc0000
     If you use SQL to perform a database restore, write the SQL carefully: any trivial mistake may prevent successful execution. The cases listed here are based on simple experiments with the option:

       MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name'

     If the logical file name is not set correctly, the following error is returned:

       Logical file 'FILE_NAME' is not part of database 'DATABASE_NAME'. Use RESTORE FILELISTONLY to list the logical file names. RESTORE DATABASE is terminating abnormally.

     To be continued...
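
     A hedged example of the statement being described (the database name, backup path and logical file names below are placeholders; RESTORE FILELISTONLY reports the correct logical names to use):

       -- List the logical file names stored in the backup
       RESTORE FILELISTONLY FROM DISK = 'C:\backups\MyDb.bak';

       -- Restore, relocating the data and log files
       RESTORE DATABASE MyDb
       FROM DISK = 'C:\backups\MyDb.bak'
       WITH MOVE 'MyDb_Data' TO 'C:\data\MyDb.mdf',
            MOVE 'MyDb_Log'  TO 'C:\data\MyDb_log.ldf',
            REPLACE;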

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick-off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine. In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before  we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula: max memory / # of buffers = buffer size If it was that simple I wouldn’t be writing this post. I’ll take “boundary” for 64K Alex For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs: Note: This test was run on a 2 core machine using per_cpu partitioning which results in 5 buffers. (Seem my previous post referenced above for the math behind buffer count.) input_memory_kb total_regular_buffers regular_buffer_size total_buffer_size 637 5 130867 654335 638 5 130867 654335 639 5 130867 654335 640 5 196403 982015 641 5 196403 982015 642 5 196403 982015 This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes: 196403 – 130867 = 65536 bytes 65536 / 1024 = 64 KB The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of none, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example. 
start_memory_range_kb end_memory_range_kb total_regular_buffers regular_buffer_size total_buffer_size 1 191 NULL NULL NULL 192 383 3 130867 392601 384 575 3 196403 589209 576 767 3 261939 785817 768 959 3 327475 982425 960 1151 3 393011 1179033 1152 1343 3 458547 1375641 1344 1535 3 524083 1572249 1536 1727 3 589619 1768857 1728 1919 3 655155 1965465 1920 2111 3 720691 2162073 2112 2303 3 786227 2358681 2304 2495 3 851763 2555289 2496 2687 3 917299 2751897 2688 2879 3 982835 2948505 2880 3071 3 1048371 3145113 3072 3263 3 1113907 3341721 3264 3455 3 1179443 3538329 3456 3647 3 1244979 3734937 3648 3839 3 1310515 3931545 3840 4031 3 1376051 4128153 4032 4096 3 1441587 4324761 As you can see, there are 21 “steps” within this range and max_memory values below 192 KB fall below the 64K per buffer limit so they generate an error when you attempt to specify them. Max approximates True as memory approaches 64K The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (Those of you who read Take it to the Max (and beyond) know that max_memory is really only referring to the event session buffer memory.) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary, for example if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory which isn’t likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size you’ll end up with total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than max_memory value you might think you’re getting. Using per_cpu partitioning on large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode. 
DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint) DECLARE @buf_size int, @part_mode varchar(8) SET @buf_size = 1 -- Set to the begining of your max_memory range (KB) SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate WHILE @buf_size <= 4096 -- Set to the end of your max_memory range (KB) BEGIN     BEGIN TRY         IF EXISTS (SELECT * from sys.server_event_sessions WHERE name = 'buffer_size_test')             DROP EVENT SESSION buffer_size_test ON SERVER         DECLARE @session nvarchar(max)         SET @session = 'create event session buffer_size_test on server                         add event sql_statement_completed                         add target ring_buffer                         with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'         EXEC sp_executesql @session         SET @session = 'alter event session buffer_size_test on server                         state = start'         EXEC sp_executesql @session         INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)             SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'     END TRY     BEGIN CATCH         INSERT @buf_size_output (input_memory_kb)             SELECT @buf_size     END CATCH     SET @buf_size = @buf_size + 1 END DROP EVENT SESSION buffer_size_test ON SERVER SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb, total_regular_buffers, regular_buffer_size, total_buffer_size from @buf_size_output group by total_regular_buffers, regular_buffer_size, total_buffer_size Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike

    Read the article

  • Print SSRS Report / PDF automatically from SQL Server agent or Windows Service

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/10/22/print-ssrs-report--pdf-from-sql-server-agent-or.aspxI have turned the Web upside-down to find a solution to this considering the least components and least maintenance as possible to achieve automated printing of an SSRS report. This is for the reason that we do not have a full software development team to maintain an app and we have to minimize the support overhead for the support team.Here is my setup:SQL Server 2008 R2 in Windows Server 2008 R2PDF format reports generated by SSRS Reports subscriptions to a Windows File ShareNetwork printerColoured reports with logo and brandingI have found and tested the following solutions to no avail:ProsConsCalling Adobe Acrobat Reader exe: "C:\Program Files (x86)\Adobe\Reader 11.0\Reader\acroRd32.exe" /n /s /o /h /t "C:\temp\print.pdf" \\printserver\printername"Very simple optionAdobe Acrobat reader requires to launch the GUI to send a job to a printer. Hence, this option cannot be used when printing from a service.Calling Adobe Acrobat Reader exe as a process from a .NET console appA bit harder than above, but still a simple solutionSame as cons abovePowershell script(Start-Process -FilePath "C:\temp\print.pdf" -Verb Print)Very simple optionUses default PDF client in quiet mode to Print, but also requires an active session.    Foxit ReaderVery simple optionRequires GUI same as Adobe Acrobat Reader Using the Reporting Services Web service to run and stream the report to an image object and then passed to the printerQuite complexThis is what we're trying to avoid  After pulling my hair out for two days, testing and evaluating the above solutions, I ended up learning more about printers (more than ever in my entire life) and how printer drivers work with PostScripts. I then bumped on to a PostScript interpreter called GhostScript (http://www.ghostscript.com/) and then the solution starts to get clearer and clearer.I managed to achieve a solution (maybe not be the simplest but efficient enough to achieve the least-maintenance-least-components goal) in 3-simple steps:Install GhostScript (http://www.ghostscript.com/download/) - this is an open-source PostScript and PDF interpreter. Printing directly using GhostScript only produces grayscale prints using the laserjet generic driver unless you save as BMP image and then interpret the colours using the imageInstall GSView (http://pages.cs.wisc.edu/~ghost/gsview/)- this is a GhostScript add-on to make it easier to directly print to a Windows printer. GSPrint automates the above  PDF -> BMP -> Printer Driver.Run the GSPrint command from SQL Server agent or Windows Service:"C:\Program Files\Ghostgum\gsview\gsprint.exe" -color -landscape -all -printer "printername" "C:\temp\print.pdf"Command line options are here: http://pages.cs.wisc.edu/~ghost/gsview/gsprint.htmAnother lesson learned is, since you are calling the script from the Service Account, it will not necessarily have the Printer mapped in its Windows profile (if it even has one). The workaround to this is by adding a local printer as you normally would and then map this printer to the network printer. Note that you may need to install the Printer Driver locally in the server.So, that's it! There are many ways to achieve a solution. The key thing is how you provide the smartest solution!

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this, post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivery updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted in to the query window, to replace the default given by tSQLt: –  Basic email check test ALTER PROCEDURE [MyChecks].[test Check Email Addresses] AS BEGIN SET NOCOUNT ON         Declare @Output VarChar(max)     Set @Output = ”       SELECT  @Output = @Output + Email +Char(13) + Char(10) FROM dbo.Email WHERE email NOT LIKE ‘%_@__%.__%’       If @Output > ”         Begin             Set @Output = Char(13) + Char(10)                           + @Output             EXEC tSQLt.Fail@Output         End   END;   Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control,  it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to file sqlCI.targets in your working folder: Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client: <!-- tSQLt tests --> <!-- Optional --> <!-- To run tSQLt tests in source control for the database, enter true. --> <enableTsqlt>true</enableTsqlt> Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence different address and build number): - superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED. 
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • Change Tracking

    - by Ricardo Peres
    You may recall my last post on Change Data Control. This time I am going to talk about other option for tracking changes to tables on SQL Server: Change Tracking. The main differences between the two are: Change Tracking works with SQL Server 2008 Express Change Tracking does not require SQL Server Agent to be running Change Tracking does not keep the old values in case of an UPDATE or DELETE Change Data Capture uses an asynchronous process, so there is no overhead on each operation Change Data Capture requires more storage and processing Here's some code that illustrates it's usage: -- for demonstrative purposes, table Post of database Blog only contains two columns, PostId and Title -- enable change tracking for database Blog, for 2 days ALTER DATABASE Blog SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON); -- enable change tracking for table Post ALTER TABLE Post ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON); -- see current records on table Post SELECT * FROM Post SELECT * FROM sys.sysobjects WHERE name = 'Post' SELECT * FROM sys.sysdatabases WHERE name = 'Blog' -- confirm that table Post and database Blog are being change tracked SELECT * FROM sys.change_tracking_tables SELECT * FROM sys.change_tracking_databases -- see current version for table Post SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c; -- update post UPDATE Post SET Title = 'First Post Title Changed' WHERE Title = 'First Post Title'; -- see current version for table Post SELECT p.PostId, p.Title, c.SYS_CHANGE_VERSION, c.SYS_CHANGE_CONTEXT FROM Post AS p CROSS APPLY CHANGETABLE(VERSION Post, (PostId), (p.PostId)) AS c; -- see changes since version 0 (initial) SELECT p.Title, c.PostId, SYS_CHANGE_VERSION, SYS_CHANGE_OPERATION, SYS_CHANGE_COLUMNS, SYS_CHANGE_CONTEXT FROM CHANGETABLE(CHANGES Post, 0) AS c LEFT OUTER JOIN Post AS p ON p.PostId = c.PostId; -- is column Title of table Post changed since version 0? SELECT CHANGE_TRACKING_IS_COLUMN_IN_MASK(COLUMNPROPERTY(OBJECT_ID('Post'), 'Title', 'ColumnId'), (SELECT SYS_CHANGE_COLUMNS FROM CHANGETABLE(CHANGES Post, 0) AS c)) -- get current version SELECT CHANGE_TRACKING_CURRENT_VERSION() -- disable change tracking for table Post ALTER TABLE Post DISABLE CHANGE_TRACKING; -- disable change tracking for database Blog ALTER DATABASE Blog SET CHANGE_TRACKING = OFF; You can read about the differences between the two options here. Choose the one that best suits your needs! SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/2.0.320/scripts/clipboard.swf'; SyntaxHighlighter.brushes.CSharp.aliases = ['c#', 'c-sharp', 'csharp']; SyntaxHighlighter.brushes.Xml.aliases = ['xml']; SyntaxHighlighter.all();

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I’ll-share-it-in-a-blog-post blog posts I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID: During normal execution this will get altered when the package is executed from the Execute Package Task however we want to make sure that at deployment-time they all have a default value of –1. Of course, they tend to get changed during development so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option but given that we have over 40 packages and we might want to carry out this reset fairly frequently I needed a more automated method so I turned to Visual Studio’s Find/Replace… feature Of course, we don’t know what value will be in that parameter so I can’t simply search for a particular value; hence I opted to use a regular expression to identify the value to be change. In the rest of this blog post I’ll explain how to do that. For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>), if you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio: <?xml version="1.0"?> <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">   <DTS:PackageParameters>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="InterfaceHistory_ID: used for Lineage"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"       DTS:ObjectName="ETLIfcHist_ID">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>     </DTS:PackageParameter>     <DTS:PackageParameter       DTS:CreationName=""       DTS:DataType="3"       DTS:Description="Some other description"       DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"       DTS:ObjectName="SomeOtherObjectName">       <DTS:Property         DTS:DataType="3"         DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>     </DTS:PackageParameter>   </DTS:PackageParameters> </DTS:Executable> We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document: {\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>} I have highlighted the name of the parameter that we’re looking for. 
    I have also highlighted two portions identified by pairs of curly braces "{…}"; these are important because they pick out the two portions either side of the value I want to replace, in other words the portions highlighted here:
    <DTS:PackageParameters>
        <DTS:PackageParameter
          DTS:CreationName=""
          DTS:DataType="3"
          DTS:Description="InterfaceHistory_ID: used for Lineage"
          DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
          DTS:ObjectName="ETLIfcHist_ID">
          <DTS:Property
            DTS:DataType="3"
            DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
        </DTS:PackageParameter>
    Those sections in the curly braces are termed tag expressions and can be identified in the replace expression using a backslash and a number identifying which tag expression you’re referring to according to its ordinal position. Hence, our replace expression is simply:
    \1-1\2
    We’re saying the portion of our file identified by the regular expression should be replaced by the first curly brace section, then the literal –1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio’s Find and Replace dialog. If you set it to look in "All Open Documents" then you can open up the code-behind of all your packages and change all of them at once. The Find and Replace dialog will look like this: That’s it! I realise that not everyone will be looking to change the value of a parameter but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y’know, on the web I have no doubt that someone is going to find a fault with my find regex expression and if that person is you….that’s OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute) so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage
    @Jamiet
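    A related option, not covered in the post above: if the packages are deployed to the SSIS 2012 catalog with the project deployment model, the value a package-level parameter uses at execution time can also be overridden server-side with catalog.set_object_parameter_value, without editing the .dtsx files at all. A rough sketch; the folder, project and package names below are placeholders, only ETLIfcHist_ID comes from the scenario above:
    -- override the parameter value in the SSIS catalog (SSISDB) after deployment
    EXEC SSISDB.catalog.set_object_parameter_value
        @object_type     = 30,                -- 30 = package-level parameter
        @folder_name     = N'MyFolder',       -- placeholder
        @project_name    = N'MyEtlProject',   -- placeholder
        @object_name     = N'MyPackage.dtsx', -- placeholder
        @parameter_name  = N'ETLIfcHist_ID',
        @parameter_value = -1;                -- literal value; @value_type defaults to 'V'
    This does not change the design-time default stored inside the package itself, so the Find/Replace approach above is still the way to go if the .dtsx files themselves need to carry –1.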

    Read the article

  • Why might login failures cause SQL 2005 to dump and ditch?

    - by Byron Sommardahl
    Our SQL 2005 server began timing out and finally stopped responding on Oct 26th. The application logs showed a ton of 17883 events leading up to a reboot. After the reboot everything was fine, but we were still scratching our heads. Fast forward 6 days... it happened again. Then again 2 days later. Then last night. Today it has happened three times so far. The timeline is fairly predictable when it happens:
    - Trans log backups.
    - Login failure for "user2".
    - Minidump.
    - Another minidump for the scheduler.
    - Repeated 17883 events.
    - Server fails little by little until it won't accept any requests.
    - Reboot is all that gets us going again (a band-aid).
    Interestingly, though, the server box itself doesn't seem to have any problems. CPU usage is normal. Network connectivity is fine. We can remote in and look at logs. Management Studio does eventually bog down, though. Today, for the first time, we tried stopping services instead of a reboot. All services stopped on their own except for the SQL Server service. We finally did an "end task" on that one and were able to bring everything back up. It worked fine for about 30 minutes until we started seeing timeouts and 17883's again. This time, probably because we didn't reboot all the way, we saw a bunch of 844 events mixed in with the 17883's. Our entire tech team here is scratching their heads... some ideas we're kicking around:
    - The MS Cumulative Update hit around the same time as when we first had a problem. Since then, we've rolled it back. Maybe it didn't roll back all the way.
    - The situation looks and feels like an unhandled "stack overflow" (no relation) in that it starts small and compounds over time. The problem with this is that there isn't significant CPU usage. At any rate, we're not ruling a SQL 2005 bug out at all.
    - Maybe we added one too many import processes and have reached our limit on this box (hard to believe).
    Looking at SQLDUMP0151.log from the time of one of the crashes, there are some "login failures" and then two stack dumps: first a normal stack dump, then a scheduler dump. Here's a snippet:
    2009-11-10 11:59:14.95 spid63 Using 'xpsqlbot.dll' version '2005.90.3042' to execute extended stored procedure 'xp_qv'. This is an informational message only; no user action is required.
    2009-11-10 11:59:15.09 spid63 Using 'xplog70.dll' version '2005.90.3042' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.
    2009-11-10 12:02:33.24 Logon Error: 18456, Severity: 14, State: 16.
    2009-11-10 12:02:33.24 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
    2009-11-10 12:08:21.12 Logon Error: 18456, Severity: 14, State: 16.
    2009-11-10 12:08:21.12 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
    2009-11-10 12:13:49.38 Logon Error: 18456, Severity: 14, State: 16.
    2009-11-10 12:13:49.38 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
    2009-11-10 12:15:16.88 Logon Error: 18456, Severity: 14, State: 16.
    2009-11-10 12:15:16.88 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
    2009-11-10 12:18:24.41 Logon Error: 18456, Severity: 14, State: 16.
    2009-11-10 12:18:24.41 Logon Login failed for user 'standard_user2'. [CLIENT: 50.36.172.101]
    2009-11-10 12:18:38.88 spid111 Using 'dbghelp.dll' version '4.0.5'
    2009-11-10 12:18:39.02 spid111 *Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\SQLDump0149.txt
    2009-11-10 12:18:39.02 spid111 SqlDumpExceptionHandler: Process 111 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
    2009-11-10 12:18:39.02 spid111 * *****************************************************************************
    2009-11-10 12:18:39.02 spid111 *
    2009-11-10 12:18:39.02 spid111 * BEGIN STACK DUMP:
    2009-11-10 12:18:39.02 spid111 * 11/10/09 12:18:39 spid 111
    2009-11-10 12:18:39.02 spid111 *
    2009-11-10 12:18:39.02 spid111 *
    2009-11-10 12:18:39.02 spid111 * Exception Address = 0159D56F Module(sqlservr+0059D56F)
    2009-11-10 12:18:39.02 spid111 * Exception Code = c0000005 EXCEPTION_ACCESS_VIOLATION
    2009-11-10 12:18:39.02 spid111 * Access Violation occurred writing address 00000000
    2009-11-10 12:18:39.02 spid111 * Input Buffer 138 bytes -
    2009-11-10 12:18:39.02 spid111 * " N R S C _ P T A 22 00 4e 00 52 00 53 00 43 00 5f 00 50 00 54 00 41 00
    2009-11-10 12:18:39.02 spid111 * C _ Q A . d b o . 43 00 5f 00 51 00 41 00 2e 00 64 00 62 00 6f 00 2e 00
    2009-11-10 12:18:39.02 spid111 * U s p S e l N e x 55 00 73 00 70 00 53 00 65 00 6c 00 4e 00 65 00 78 00
    2009-11-10 12:18:39.02 spid111 * t A c c o u n t 74 00 41 00 63 00 63 00 6f 00 75 00 6e 00 74 00 00 00
    2009-11-10 12:18:39.02 spid111 * @ i n t F o r m I 0a 40 00 69 00 6e 00 74 00 46 00 6f 00 72 00 6d 00 49
    2009-11-10 12:18:39.02 spid111 * D & 8 @ t x 00 44 00 00 26 04 04 38 00 00 00 09 40 00 74 00 78 00
    2009-11-10 12:18:39.02 spid111 * t A l i a s § 74 00 41 00 6c 00 69 00 61 00 73 00 00 a7 0f 00 09 04
    2009-11-10 12:18:39.02 spid111 * Ð GQE9732 d0 00 00 07 00 47 51 45 39 37 33 32
    2009-11-10 12:18:39.02 spid111 *
    2009-11-10 12:18:39.02 spid111 *
    2009-11-10 12:18:39.02 spid111 * MODULE BASE END SIZE
    2009-11-10 12:18:39.02 spid111 * sqlservr 01000000 02C09FFF 01c0a000
    2009-11-10 12:18:39.02 spid111 * ntdll 7C800000 7C8C1FFF 000c2000
    2009-11-10 12:18:39.02 spid111 * kernel32 77E40000 77F41FFF 00102000
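    Not part of the question, but for anyone chasing similar 17883 (non-yielding scheduler) symptoms: the queries below are a rough sketch of what can be captured from the standard SQL Server 2005 DMVs while the server is still limping along, before a reboot wipes the evidence. The scheduler_id/session_id filters are just illustrative defaults.
    -- scheduler health: runnable or queued work that never drains is a red flag for 17883
    SELECT scheduler_id, status, is_online, is_idle,
           current_tasks_count, runnable_tasks_count,
           active_workers_count, work_queue_count, pending_disk_io_count
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 255;   -- exclude hidden/internal schedulers
    -- what active requests are doing and waiting on at the same moment
    SELECT session_id, status, command, wait_type, wait_time, cpu_time, total_elapsed_time
    FROM sys.dm_exec_requests
    WHERE session_id > 50;      -- ignore most system sessions
    -- blocking chains, if any
    SELECT session_id, blocking_session_id, wait_type, wait_duration_ms
    FROM sys.dm_os_waiting_tasks
    WHERE blocking_session_id IS NOT NULL;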

    Read the article

  • Oral Tradition Check: Two Hundred Meanings for "NULL" in SQL

    - by Thomas L Holaday
    Two decades ago, the topic of "NULL" came up in conversation with a scholarly colleague. As I remember it, he said that C. J. Date, in an essay critical of commercial implementations of SQL, had listed over two hundred meanings for NULL. To my regret, I did not note down the details; but finding that list has since been on my Bucket List. Has anyone else heard this legend? Was it perhaps not Date, but another critic of commercial implementations of SQL?

    Read the article

< Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >