Search Results

Search found 33316 results on 1333 pages for 'sql team'.


  • LINQ to SQL Queries odd Materialization

    - by ptoinson
    I ran across an interesting LINQ to SQL, uh, feature the other day. Perhaps someone can give me a logical explanation for the reasoning behind the results. Take the code below as my example, which uses the AdventureWorks database set up in a LINQ to SQL DataContext. This is a clip from my unit test. The resulting customer returned from a call to both CustomerQuery_Test_01() and CustomerQuery_Test_02() is the same. However, the queries executed on the SQL Server differ in a major way. The method CustomerQuery_Test_01 is causing the entire Customer table to be materialized, while the call to CustomerQuery_Test_02 is only causing the single customer to be materialized. The resulting SQL queries are at the bottom of this post. Anyone have a good reason for this? To me, it was highly non-intuitive. protected virtual Customer GetByPrimaryKey(Func<Customer, bool> keySelection) { AdventureWorksDataContext context = new AdventureWorksDataContext(); return (from r in context.Customers select r).SingleOrDefault(keySelection); } [TestMethod] public void CustomerQuery_Test_01() { Customer customer = GetByPrimaryKey(c => c.CustomerID == 2); } [TestMethod] public void CustomerQuery_Test_02() { AdventureWorksDataContext context = new AdventureWorksDataContext(); Customer customer = (from r in context.Customers select r).SingleOrDefault(c => c.CustomerID == 2); } Query for CustomerQuery_Test_01 (notice the lack of a where clause) SELECT [t0].[CustomerID], [t0].[NameStyle], [t0].[Title], [t0].[FirstName], [t0].[MiddleName], [t0].[LastName], [t0].[Suffix], [t0].[CompanyName], [t0].[SalesPerson], [t0].[EmailAddress], [t0].[Phone], [t0].[PasswordHash], [t0].[PasswordSalt], [t0].[rowguid], [t0].[ModifiedDate] FROM [SalesLT].[Customer] AS [t0] Query for CustomerQuery_Test_02 (notice the where clause) SELECT [t0].[CustomerID], [t0].[NameStyle], [t0].[Title], [t0].[FirstName], [t0].[MiddleName], [t0].[LastName], [t0].[Suffix], [t0].[CompanyName], [t0].[SalesPerson], [t0].[EmailAddress], [t0].[Phone], [t0].[PasswordHash], [t0].[PasswordSalt], [t0].[rowguid], [t0].[ModifiedDate] FROM [SalesLT].[Customer] AS [t0] WHERE [t0].[CustomerID] = @p0

  • How do I introspect on a SQL Server?

    - by MetaHyperBolic
    I have a server with a vendor application which is heavily database-reliant. I need to make some minor changes to the data in a few tables in the database in an automated fashion. Just INSERTs and UPDATEs, nothing fancy. Vendors being vendors, I can never be quite sure when they change the schema of a database during upgrade. To that end, how do I ask the SQL server, in some scriptable fashion, "Hey, does this table still exist? Yeah, cool, okay, but does it have this column? What's the data type and size on that? Is it nullable? Could you give me a list of tables? In this table, could you give me a list of columns? Any primary keys there?" I do not need to do this for the whole schema, only part of it, just a quick check of the database before I launch into things. We have Microsoft SQL Server 2005 on it currently, but it might easily move to Microsoft SQL Server 2008. I am probably not using the correct terminology when searching. I do know that ORM is not only too much overhead for this sort of thing, but also that I have no chance of pitching it to my coworkers.
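
    Not in the original question, but for reference the ANSI INFORMATION_SCHEMA views handle exactly this kind of check on both SQL Server 2005 and 2008, and they script easily. A minimal sketch, with dbo.Customers standing in for whichever vendor table you care about:

        -- Is the table still there?
        IF EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
                   WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'Customers')
            PRINT 'Table exists';

        -- Columns, data types, sizes and nullability
        SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
        FROM   INFORMATION_SCHEMA.COLUMNS
        WHERE  TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'Customers'
        ORDER BY ORDINAL_POSITION;

        -- Primary key columns
        SELECT kcu.COLUMN_NAME
        FROM   INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS tc
        JOIN   INFORMATION_SCHEMA.KEY_COLUMN_USAGE AS kcu
               ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
              AND kcu.TABLE_NAME = tc.TABLE_NAME
        WHERE  tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
          AND  tc.TABLE_SCHEMA = 'dbo' AND tc.TABLE_NAME = 'Customers';

    A full list of tables comes from INFORMATION_SCHEMA.TABLES with no filter, so the same three queries cover everything asked above and run unchanged on 2005 and 2008.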

  • SQL query problem

    - by Brisonela
    Hi, I'm new to StackOverflow and new to SQL Server, and I'd like your help with a troublesome query. This is my database structure (it's half Spanish, I hope that doesn't matter; the schema diagram was attached to the original post). My problem is that I don't know how to make a query that states which team is local and which is the visitor (using table TMatch, knowing that a stadium belongs to only one team). This is as far as I can get: Select P.NroMatch, (select * from fnTeam (P.TeamA)) as TeamA, (select * from fnTeam (P.TeamB)) as TeamB, (select * from fnEstadium (P.CodEstadium)) as Estadium, (cast(P.GolesTeamA as varchar)) + '-' + (cast(P.GolesTeamB as varchar)) as Score, P.Fecha from TMatch P Using these functions: If object_id ('fnTeam','fn') is not null drop function fnTeam go create function fnTeam(@CodTeam varchar(5)) returns table return(Select Name from TTeam where CodTeam = @CodTeam) go select * from fnTeam ('Eq001') go ----**** If object_id ('fnEstadium','fn') is not null drop function fnEstadium go create function fnEstadium(@CodEstadium varchar(5)) returns table return(Select Name from TEstadium where CodEstadium = @CodEstadium) go I hope I've explained myself well, and thank you for your help in advance.
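
    One hedged sketch of the local/visitor logic, assuming (since the schema diagram isn't reproduced here) that TEstadium carries a CodTeam column identifying the stadium's owner; the owner of the match's stadium is then the local side and the other team is the visitor:

        -- Assumption: TEstadium has a CodTeam column naming the owning team.
        SELECT  P.NroMatch,
                CASE WHEN E.CodTeam = P.TeamA THEN TA.Name ELSE TB.Name END AS LocalTeam,
                CASE WHEN E.CodTeam = P.TeamA THEN TB.Name ELSE TA.Name END AS VisitorTeam,
                E.Name AS Estadium,
                CAST(P.GolesTeamA AS varchar(5)) + '-' + CAST(P.GolesTeamB AS varchar(5)) AS Score,
                P.Fecha
        FROM    TMatch    AS P
        JOIN    TEstadium AS E  ON E.CodEstadium = P.CodEstadium
        JOIN    TTeam     AS TA ON TA.CodTeam    = P.TeamA
        JOIN    TTeam     AS TB ON TB.CodTeam    = P.TeamB;

    If the owner link actually lives on TTeam rather than TEstadium, the same CASE idea applies with the join direction reversed.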

  • Is it possible to aggregate over differing where clauses?

    - by BenAlabaster
    Is it possible to calculate multiple aggregates based on differing where clauses? For instance: let's say I have two tables, one for Invoices and one for InvoiceLineItems. The invoice table has a total field for the invoice total, and each of the invoice line item records in the InvoiceLineItems table contains a field that denotes whether the line item is discountable or not. I want three sum totals: one where Discountable = 0, one where Discountable = 1, and one where Discountable is irrelevant, such that my output would be:

        InvoiceNumber Total DiscountableTotal NonDiscountableTotal
        ------------- ----- ----------------- --------------------
        1             53.27 27.27             16.00
        2             38.94  4.76             34.18
        3...

    The only way I've found so far is by using something like: Select i.InvoiceNumber, i.Total, t0.Total As DiscountableTotal, t1.Total As NonDiscountableTotal From Invoices i Left Join ( Select InvoiceNumber, Sum(Amount) As Total From InvoiceLineItems Where Discountable = 0 Group By InvoiceNumber ) As t0 On i.InvoiceNumber = t0.InvoiceNumber Left Join ( Select InvoiceNumber, Sum(Amount) As Total From InvoiceLineItems Where Discountable = 1 Group By InvoiceNumber ) As t1 On i.InvoiceNumber = t1.InvoiceNumber This seems somewhat cumbersome; it would be nice if I could do something like: Select InvoiceNumber, Sum(Amount) Where Discountable = 1 As Discountable, Sum(Amount) Where Discountable = 0 As NonDiscountable Group By InvoiceNumber I realize that SQL is completely invalid, but it logically portrays what I'm trying to do... TIA P.S. I need this to run on a SQL Server 2000 instance, but I am also interested (for future reference) in if/how I would achieve this on SQL Server 2005/2008.
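
    For what it's worth, the wished-for syntax above is very close to something that does work, even on SQL Server 2000: move the condition into a CASE expression inside the aggregate. A sketch against the tables described in the question (assuming the invoice total is simply the sum of its line items; if not, join back to Invoices as in the original query):

        SELECT   li.InvoiceNumber,
                 SUM(li.Amount) AS Total,
                 SUM(CASE WHEN li.Discountable = 1 THEN li.Amount ELSE 0 END) AS DiscountableTotal,
                 SUM(CASE WHEN li.Discountable = 0 THEN li.Amount ELSE 0 END) AS NonDiscountableTotal
        FROM     InvoiceLineItems AS li
        GROUP BY li.InvoiceNumber;

    This scans InvoiceLineItems once instead of once per condition, which is the main advantage over the two derived tables.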

  • When to use CTEs to encapsulate sub-results, and when to let the RDBMS worry about massive joins.

    - by IanC
    This is a SQL theory question. I can provide an example, but I don't think it's needed to make my point. Anyone experienced with SQL will immediately know what I'm talking about. Usually we use joins to minimize the number of records due to matching the left and right rows. However, under certain conditions, joining tables cause a multiplication of results where the result is all permutations of the left and right records. I have a database which has 3 or 4 such joins. This turns what would be a few records into a multitude. My concern is that the tables will be large in production, so the number of these joined rows will be immense. Further, heavy math is performed on each row, and the idea of performing math on duplicate rows is enough to make anyone shudder. I have two questions. The first is, is this something I should care about, or will SQL Server intelligently realize these rows are all duplicates and optimize all processing accordingly? The second is, is there any advantage to grouping each part of the query so as to get only the distinct values going into the next part of the query, using something like: WITH t1 AS ( SELECT DISTINCT... [or GROUP BY] ), t2 AS ( SELECT DISTINCT... ), t3 AS ( SELECT DISTINCT... ) SELECT... I have often seen the use of DISTINCT applied to subqueries. There is obviously a reason for doing this. However, I'm talking about something a little different and perhaps more subtle and tricky.
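
    As a concrete illustration of the second question, pre-aggregating (or selecting DISTINCT) inside a CTE collapses the duplicates before the expensive math runs, so the join cannot multiply the rows fed into it. The table and column names below are hypothetical, purely to show the shape:

        -- Hypothetical example: collapse child rows to one row per key before joining,
        -- so the join cannot multiply the rows that feed the expensive calculation.
        WITH OrderTotals AS
        (
            SELECT   od.OrderID,
                     SUM(od.Quantity * od.UnitPrice) AS OrderTotal   -- heavy math runs once per OrderID
            FROM     dbo.OrderDetails AS od
            GROUP BY od.OrderID
        )
        SELECT  o.OrderID,
                o.CustomerID,
                ot.OrderTotal
        FROM    dbo.Orders AS o
        JOIN    OrderTotals AS ot ON ot.OrderID = o.OrderID;

    Whether the optimizer would have reached the same plan on its own depends on the query, which is exactly the question being asked; the CTE form at least makes the intent explicit.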

  • Counting character occurrences in an Access database column in SQL

    - by jzr
    Good evening. My problem is possibly very easy; I have spent some time researching it, probably have a brain lock, and am unable to solve this, so help would be much appreciated. Database structure:

        col1 col2 col3 col4
        ====================
        1233+4566+ABCD+CDEF
        1233+4566+ACD1+CDEF
        1233+4566+D1AF+CDEF

    I need to count character occurrences in col3; the wanted result from the previous table would be:

        char count
        ===========
        A    3
        B    1
        C    2
        D    3
        F    1
        1    2

    Is this possible to achieve using SQL only? At the moment I am thinking of passing a parameter into the SQL query, counting the characters one by one and then summing, but I have not started the VBA part yet, and frankly I would rather not. This is my query at the moment: PARAMETERS X Long; SELECT First(Mid(TABLE.col3,X,1)) AS [col3 Field], Count(Mid(TABLE.col3,X,1)) AS Dcount FROM TEST GROUP BY Mid(TABLE.col3,X,1) HAVING (((Count(Mid([TABLE].[col3],[X],1)))>=1)); Ideas and help are much appreciated; as I said, this is probably very easy for some of you guys, I don't usually work with Access and SQL. Thanks.
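
    Access (Jet) SQL cannot easily iterate over character positions without a helper table, so the usual trick is a numbers (tally) table joined against the string. The sketch below is T-SQL for SQL Server, the flavour used elsewhere on this page; in Access the same idea works if you keep a small saved Numbers table in place of the CTE:

        -- Tally-table sketch: one row per character position, assuming col3 is at most 50 characters.
        WITH Numbers AS
        (
            SELECT TOP (50) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
            FROM   sys.all_objects
        )
        SELECT   SUBSTRING(t.col3, num.n, 1) AS [char],
                 COUNT(*)                    AS [count]
        FROM     TEST AS t
        JOIN     Numbers AS num ON num.n <= LEN(t.col3)
        GROUP BY SUBSTRING(t.col3, num.n, 1)
        ORDER BY [char];

    The join fans each row out to one row per character, and the GROUP BY then counts each distinct character, which matches the expected output above.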

  • In synchronous query calls, one query causes another query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it. I was involved in optimizing an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there are some value mappings, which are performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls and almost 20% on SELECT calls (the rest was distributed among other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), and each SELECT took longer this time! Everything was running synchronously; there were no async calls, and there was only a single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special, and they are on different tables, but in the same DB. I tested with the DB both on the application machine and on a remote network machine. I can't think of any explanation for this, as the profiler (an application profiler, not SQL Profiler) reported the changes in the method call times, and after removing the INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (It can't be cache / query optimization effects, because the queries were run synchronously, in a single thread, and it was far from affecting the cache this much.) I should note that the speed bottleneck was in SQL Server, which was using most of the CPU time.
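
    One way to take the application-side profiler out of the equation is to ask SQL Server itself where the time goes. A rough sketch of the usual server-side checks (valid on SQL Server 2005 and later):

        -- Per-statement timing for an ad-hoc test of one of the SELECTs
        SET STATISTICS TIME ON;
        SET STATISTICS IO ON;
        -- ... run the SELECT in question here ...
        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;

        -- Aggregate timings for cached statements, heaviest CPU consumers first
        SELECT TOP (20)
               qs.execution_count,
               qs.total_worker_time  / 1000 AS total_cpu_ms,
               qs.total_elapsed_time / 1000 AS total_elapsed_ms,
               SUBSTRING(st.text, qs.statement_start_offset / 2 + 1, 100) AS statement_start
        FROM   sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;

    Comparing these server-side numbers for the two runs (with and without the INSERTs) would show whether the extra SELECT time is really spent inside SQL Server or somewhere in between.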

  • Access to SQL Server 2005 from a non-domain machine using Windows authentication

    - by user304582
    Hi, I have a Windows domain within which a machine is running SQL Server 2005 and which is configured to support only Windows authentication. I would like to run a C# client application on a machine on the same network, but which is NOT on the domain, and access a database on the SQL Server 2005 instance. I thought that it would be a simple matter of doing something like this: string connectionString = "Data Source=server;Initial Catalog=database;User Id=domain\user;Password=password"; SqlConnection connection = new SqlConnection(connectionString); connection.Open(); However, this fails: the client-side error is: System.Data.SqlClient.SqlException: Login failed for user 'domain\user' and the server-side error is: Error 18456, Severity 14, State 5 I have tried various things including setting integrated security to true and false, and \ instead of \ in the User Id, but without success. In general, I know that it possible to connect to the SQL Server 2005 instance from a non-domain machine (for example, I am working with a Linux-based application which happily does this), but I don't seem to be able to work out how to do it from a Windows machine. Help would be appreciated! Thanks, Martin

  • SQL Server 2008, Books Online, and old documentation...

    - by Chris J
    [I have no idea if Stack Overflow really is the right place for this, but I don't know how many devs on here run into MSI issues with SQL Server; suggest SuperUser or ServerFault if folk think it's better on either of those] About a year ago, when we were looking at moving our codebase forward and migrating to SQL Server 2008, I pulled down a copy of Books Online from the MSDN. Reviewed it, did background research, fed results upstream, grabbed Express and tinkered with that. Then we got the nod to move forward (hurrah!) this past couple of weeks. So, armed with Developer Edition and running through the install, I've since found out I've zapped the Books Online MSI, no-one's got a copy of it, and Microsoft only have a later version (Oct 2009) available, so damned if I can update my SQL Server fully and properly... {mutter grumble}. Does anyone know if old versions of Books Online are available for download anywhere? Poking around the Microsoft download centre I can't find it, and neither is my google-fu finding it. For reference, I'm looking for SQLServer2008_BOL_August2008_ENU.msi ... This may just be a case of good ol' manually deleting the files and (trying to) clean up the registry :-(

  • Performance considerations for common SQL queries

    - by Jim Giercyk
    Originally posted on: http://geekswithblogs.net/NibblesAndBits/archive/2013/10/16/performance-considerations-for-common-sql-queries.aspxSQL offers many different methods to produce the same results.  There is a never-ending debate between SQL developers as to the “best way” or the “most efficient way” to render a result set.  Sometimes these disputes even come to blows….well, I am a lover, not a fighter, so I decided to collect some data that will prove which way is the best and most efficient.  For the queries below, I downloaded the test database from SQLSkills:  http://www.sqlskills.com/sql-server-resources/sql-server-demos/.  There isn’t a lot of data, but enough to prove my point: dbo.member has 10,000 records, and dbo.payment has 15,554.  Our result set contains 6,706 records. The following queries produce an identical result set; the result set contains aggregate payment information for each member who has made more than 1 payment from the dbo.payment table and the first and last name of the member from the dbo.member table.   /*************/ /* Sub Query  */ /*************/ SELECT  a.[Member Number] ,         m.lastname ,         m.firstname ,         a.[Number Of Payments] ,         a.[Average Payment] ,         a.[Total Paid] FROM    ( SELECT    member_no 'Member Number' ,                     AVG(payment_amt) 'Average Payment' ,                     SUM(payment_amt) 'Total Paid' ,                     COUNT(Payment_No) 'Number Of Payments'           FROM      dbo.payment           GROUP BY  member_no           HAVING    COUNT(Payment_No) > 1         ) a         JOIN dbo.member m ON a.[Member Number] = m.member_no         /***************/ /* Cross Apply  */ /***************/ SELECT  ca.[Member Number] ,         m.lastname ,         m.firstname ,         ca.[Number Of Payments] ,         ca.[Average Payment] ,         ca.[Total Paid] FROM    dbo.member m         CROSS APPLY ( SELECT    member_no 'Member Number' ,                                 AVG(payment_amt) 'Average Payment' ,                                 SUM(payment_amt) 'Total Paid' ,                                 COUNT(Payment_No) 'Number Of Payments'                       FROM      dbo.payment                       WHERE     member_no = m.member_no                       GROUP BY  member_no                       HAVING    COUNT(Payment_No) > 1                     ) ca /********/                    /* CTEs  */ /********/ ; WITH    Payments           AS ( SELECT   member_no 'Member Number' ,                         AVG(payment_amt) 'Average Payment' ,                         SUM(payment_amt) 'Total Paid' ,                         COUNT(Payment_No) 'Number Of Payments'                FROM     dbo.payment                GROUP BY member_no                HAVING   COUNT(Payment_No) > 1              ),         MemberInfo           AS ( SELECT   p.[Member Number] ,                         m.lastname ,                         m.firstname ,                         p.[Number Of Payments] ,                         p.[Average Payment] ,                         p.[Total Paid]                FROM     dbo.member m                         JOIN Payments p ON m.member_no = p.[Member Number]              )     SELECT  *     FROM    MemberInfo /************************/ /* SELECT with Grouping   */ /************************/ SELECT  p.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         COUNT(Payment_No) 'Number Of Payments' ,         AVG(payment_amt) 'Average Payment' ,         SUM(payment_amt) 'Total Paid' 
FROM    dbo.payment p         JOIN dbo.member m ON m.member_no = p.member_no GROUP BY p.member_no ,         m.lastname ,         m.firstname HAVING  COUNT(Payment_No) > 1   We can see what is going on in SQL’s brain by looking at the execution plan.  The Execution Plan will demonstrate which steps and in what order SQL executes those steps, and what percentage of batch time each query takes.  SO….if I execute all 4 of these queries in a single batch, I will get an idea of the relative time SQL takes to execute them, and how it renders the Execution Plan.  We can settle this once and for all.  Here is what SQL did with these queries:   Not only did the queries take the same amount of time to execute, SQL generated the same Execution Plan for each of them.  Everybody is right…..I guess we can all finally go to lunch together!  But wait a second, I may not be a fighter, but I AM an instigator.     Let’s see how a table variable stacks up.  Here is the code I executed: /********************/ /*  Table Variable  */ /********************/ DECLARE @AggregateTable TABLE     (       member_no INT ,       AveragePayment MONEY ,       TotalPaid MONEY ,       NumberOfPayments MONEY     ) INSERT  @AggregateTable         SELECT  member_no 'Member Number' ,                 AVG(payment_amt) 'Average Payment' ,                 SUM(payment_amt) 'Total Paid' ,                 COUNT(Payment_No) 'Number Of Payments'         FROM    dbo.payment         GROUP BY member_no         HAVING  COUNT(Payment_No) > 1   SELECT  at.member_no 'Member Number' ,         m.lastname ,         m.firstname ,         at.NumberOfPayments 'Number Of Payments' ,         at.AveragePayment 'Average Payment' ,         at.TotalPaid 'Total Paid' FROM    @AggregateTable at         JOIN dbo.member m ON m.member_no = at.member_no In the interest of keeping things in groupings of 4, I removed the last query from the previous batch and added the table variable query.  Here’s what I got:     Since we first insert into the table variable, then we read from it, the Execution Plan renders 2 steps.  BUT, the combination of the 2 steps is only 22% of the batch.  It is actually faster than the other methods even though it is treated as 2 separate queries in the Execution Plan.  The argument I often hear against Table Variables is that SQL only estimates 1 row for the table size in the Execution Plan.  While this is true, the estimate does not come in to play until you read from the table variable.  In this case, the table variable had 6,706 rows, but it still outperformed the other queries.  People argue that table variables should only be used for hash or lookup tables.  The fact is, you have control of what you put IN to the variable, so as long as you keep it within reason, these results suggest that a table variable is a viable alternative to sub-queries. If anyone does volume testing on this theory, I would be interested in the results.  My suspicion is that there is a breaking point where efficiency goes down the tubes immediately, and it would be interesting to see where the threshold is. Coding SQL is a matter of style.  If you’ve been around since they introduced DB2, you were probably taught a little differently than a recent computer science graduate.  If you have a company standard, I strongly recommend you follow it.    If you do not have a standard, generally speaking, there is no right or wrong answer when talking about the efficiency of these types of queries, and certainly no hard-and-fast rule.  
Volume and infrastructure will dictate a lot when it comes to performance, so your results may vary in your environment.  Download the database and try it!

  • Upgrading SSIS Custom Components for SQL Server 2012

    Having finally got around to upgrading my custom components to SQL Server 2012, I thought I’d share some notes on the process. One of the goals was minimal duplication, so the same code files are used to build the 2008 and 2012 components, I just have a separate project file. The high level steps are listed below, followed by some more details. Create a 2012 copy of the project file Upgrade project, just open the new project file is VS2010 Change target framework to .NET 4.0 Set conditional compilation symbol for DENALI Change any conditional code, including assembly version and UI type name Edit project file to change referenced assemblies for 2012 Change target framework to .NET 4.0 Open the project properties. On the Applications page, change the Target framework to .NET Framework 4. Set conditional compilation symbol for DENALI Re-open the project properties. On the Build tab, first change the Configuration to All Configurations, then set a Conditional compilation symbol of DENALI. Change any conditional code, including assembly version and UI type name The value doesn’t have to be DENALI, it can actually be anything you like, that is just what I use. It is how I control sections of code that vary between versions. There were several API changes between 2005 and 2008, as well as interface name changes. Whilst we don’t have the same issues between 2008 and 2012, I still have some sections of code that do change such as the assembly attributes. #if DENALI [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2012")] [assembly: AssemblyCopyright("Copyright © 2012 Konesans Ltd")] [assembly: AssemblyVersion("3.0.0.0")] #else [assembly: AssemblyDescription("Data Generator Source for SQL Server Integration Services 2008")] [assembly: AssemblyCopyright("Copyright © 2008 Konesans Ltd")] [assembly: AssemblyVersion("2.0.0.0")] #endif The Visual Studio editor automatically formats the code based on the current compilation symbols, hence in this case the 2008 code is grey to indicate it is disabled. As you can see in the previous example I have distinct assembly version attributes, ensuring I can run both 2008 and 2012 versions of my component side by side. For custom components with a user interface, be sure to update the UITypeName property of the DtsTask or DtsPipelineComponent attributes. As above I use the conditional compilation symbol to control the code. #if DENALI [DtsTask ( DisplayName = "File Watcher Task", Description = "File Watcher Task", IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico", UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=3.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b", TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2012 Konesans Ltd; http://www.konesans.com" )] #else [DtsTask ( DisplayName = "File Watcher Task", Description = "File Watcher Task", IconResource = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTask.ico", UITypeName = "Konesans.Dts.Tasks.FileWatcherTask.FileWatcherTaskUI,Konesans.Dts.Tasks.FileWatcherTask,Version=2.0.0.0,Culture=Neutral,PublicKeyToken=b2ab4a111192992b", TaskContact = "File Watcher Task; Konesans Ltd; Copyright © 2004-2008 Konesans Ltd; http://www.konesans.com" )] #endif public sealed class FileWatcherTask: Task, IDTSComponentPersist, IDTSBreakpointSite, IDTSSuspend { // .. code goes on... } Shown below is another example I found that needed changing. 
I borrow one of the MS editors, and use it against a custom property, but need to ensure I reference the correct version of the MS controls assembly. This section of code is actually shared between the 2005, 2008 and 2012 versions of my component hence it has test for both DENALI and KATMAI symbols. #if DENALI const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=11.0.00.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"; #elif KATMAI const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"; #else const string multiLineUI = "Microsoft.DataTransformationServices.Controls.ModalMultilineStringEditor, Microsoft.DataTransformationServices.Controls, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"; #endif // Create Match Expression parameter IDTSCustomPropertyCollection100 propertyCollection = outputColumn.CustomPropertyCollection; IDTSCustomProperty100 property = propertyCollection.New(); property = propertyCollection.New(); property.Name = MatchParams.Name; property.Description = MatchParams.Description; property.TypeConverter = typeof(MultilineStringConverter).AssemblyQualifiedName; property.UITypeEditor = multiLineUI; property.Value = MatchParams.DefaultValue; Edit project file to change referenced assemblies for 2012 We now need to edit the project file itself. Open the MyComponente2012.cproj  in you favourite text editor, and then perform a couple of find and replaces as listed below: Find Replace Comment Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91 Change the assembly references version from SQL Server 2008 to SQL Server 2012. Microsoft SQL Server\100\ Microsoft SQL Server\110\ Change any assembly reference hint path locations from from SQL Server 2008 to SQL Server 2012. If you use any Build Events during development, such as copying the component assembly to the DTS folder, or calling GACUTIL to install it into the GAC, you can also change these now. An example of my new post-build event for a pipeline component is shown below, which uses the .NET 4.0 path for GACUTIL. It also uses the 110 folder location, instead of 100 for SQL Server 2008, but that was covered the the previous find and replace. "C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\gacutil.exe" /if "$(TargetPath)" copy "$(TargetPath)" "%ProgramFiles%\Microsoft SQL Server\110\DTS\PipelineComponents" /Y

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover: SQLU part 1 - What and why of database testing SQLU part 2 - What and why of database refactoring SQLU part 2 – Tools of the trade With that out of the way let us sharpen our pencils and get going. Why test a database The sad state of the industry today is that there is very little emphasis on testing in general. Test driven development is still a small niche of the programming world while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment they are mostly viewed as waste of time. This is because the average person (let’s not fool ourselves, we’re all average) is unable to think about lower future costs in relation to little more current work. It’s orders of magnitude easier to know about the current costs in relation to current amount of work. That’s why programmers convince themselves testing is a waste of time. However we have to ask ourselves what tests are really about? Maybe finding bugs? No, not really. If we introduce bugs, we’re likely to write test around those bugs too. But yes we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we’ll see what breaks in other tests. And here comes the inability to estimate future costs. By spending just a few more hours writing those tests we’d know instantly what broke where. Imagine we fix a reported bug. We check-in the code, deploy it and the users are happy. Until we get a call 2 weeks later about a certain monthly process has stopped working. What we don’t know is that this process was developed by a long gone coworker and for some reason it relied on that same bug we’ve happily fixed. There’s no way we could’ve known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there’s this ETL job that relied on data from that monthly process. Now that we’ve fixed the process it’s giving unexpected (yet correct since we fixed it) data to the ETL job. So we have to fix that too. But there’s this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the “Loop of maintenance horror”. With the loop eventually comes blame. Here’s a nice tip for all developers and DBAs out there: If you make a mistake man up and admit to it. All of the above is valid for any kind of software development. Keeping this in mind the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this can we really afford not to have tests? 
What to test in a database Now that we’ve decided we’ll dive into this testing thing we have to ask ourselves what needs to be tested? The short answer is: everything. The long answer is: read on! There are 2 main ways of doing tests: Black box and White box testing. Black box testing means we have no idea how the system internals are built and we only have access to it’s inputs and outputs. With it we test that the internal changes to the system haven’t caused the input/output behavior of the system to change. The most important thing to test here are the edge conditions. It’s where most programs break. Having good edge condition tests we can be more confident that the systems changes won’t break. White box testing has the full knowledge of the system internals. With it we test the internal system changes, different states of the application, etc… White and Black box tests should be complementary to each other as they are very much interconnected. Testing database routines includes testing stored procedures, views, user defined functions and anything you use to access the data with. Database routines are your input/output interface to the database system. They count as black box testing. We test then for 2 things: Data and schema. When testing schema we only care about the columns and the data types they’re returning. After all the schema is the contract to the out side systems. If it changes we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells the SQL Server to return only empty results sets. This speeds up tests because it doesn’t return any data to the client. After we’ve validated the schema we have to test the returned data. There no other way to do this but to have expected data known before the tests executes and comparing that data to the database routine output. Testing Authentication and Authorization helps us validate who has access to the SQL Server box (Authentication) and who has access to certain database objects (Authorization). For desktop applications and windows authentication this works well. But the biggest problem here are web apps. They usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad. Load testing ensures us that our database can handle peak loads. One often overlooked tool for load testing is Microsoft’s OSTRESS tool. It’s part of RML utilities (x86, x64) for SQL Server and can help determine if our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by looking at why certain queries are slow and what to do to fix them.   One particular problem to think about is how to begin testing existing databases. First thing we have to do is to get to know those databases. We can’t test something when we don’t know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc… The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we’ve done that we move on to the next grouping and so on until we’ve covered the whole database. Then we move on to the next one. 
Database testing is a topic that we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.
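
    To make the SET FMTONLY ON point from the post concrete, here is a minimal schema-only check against a hypothetical procedure name; it returns empty result sets whose column names and types can then be compared with the expected contract:

        -- Schema-only execution: no rows come back, only result-set metadata.
        SET FMTONLY ON;
        EXEC dbo.usp_GetCustomerOrders @CustomerId = 1;   -- procedure name is illustrative
        SET FMTONLY OFF;                                   -- always switch it back off

    Because no data is returned, a whole suite of such schema checks can run quickly, leaving the slower data-comparison tests for a smaller set of routines.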

  • Team Foundation Server 2008 - TF220056 Error during installation

    - by David
    I'm attempting to install Team Foundation Server 2008 on a Windows Server 2003 instance that exists under Hyper-V. The SQL Server database itself is held on the root partition of the Hyper-V server and has the Reporting Services installed (so I've solved the TF220059 error already). After hitting "Next " after typing the name of the SQL Server I get this error: --------------------------- Microsoft Visual Studio 2008 Team Foundation Server Setup --------------------------- TF220056: An unrecoverable error occurred while trying to check the status of the Team Foundation database. Installation cannot continue. Check the install log for more details. --------------------------- OK --------------------------- The error log's stack trace makes it look like a bug in the TFS installer itself: [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe: System.IO.IOException: The directory name is invalid. [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe: at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) [03/22/10,19:14:42] TFSUI: [2] tfsdb.exe: at System.IO.__Error.WinIOError() [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe: at System.IO.Path.GetTempFileName() [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe: at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.get_Log() [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe: at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.Commands.InstallerCommand.Run() [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe: at Microsoft.TeamFoundation.DatabaseInstaller.CommandLine.CommandLine.RunCommand(String[] args) [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe: The directory name is invalid. [03/22/10,19:14:43] TFSUI: [2] tfsdb.exe check failed with error code: 100 I'm running the installer as the domain Administrator, although the server is a Terminal Server in Application Mode, might that be the cause of the problems?

  • SQL -- How to combine three SELECT statements with very tricky requirements

    - by Frederick
    I have a SQL query with three SELECT statements. A picture of the data tables generated by these three select statements is located at www.britestudent.com/pub/1.png. Each of the three data tables have identical columns. I want to combine these three tables into one table such that: (1) All rows in top table (Table1) are always included. (2) Rows in the middle table (Table2) are included only when the values in column1 (UserName) and column4 (CourseName) do not match with any row from Table1. Both columns need to match for the row in Table2 to not be included. (3) Rows in the bottom table (Table3) are included only when the value in column4 (CourseName) is not already in any row of the results from combining Table1 and Table2. I have had success in implementing (1) and (2) with an SQL query like this: SELECT DISTINCT UserName AS UserName, MAX(AmountUsed) AS AmountUsed, MAX(AnsweredCorrectly) AS AnsweredCorrectly, CourseName, MAX(course_code) AS course_code, MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse, MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse FROM ( "SELECT statement 1" UNION "SELECT statement 2" ) dt_derivedTable_1 GROUP BY CourseName, UserName Where "SELECT statement 1" is the query that generates Table1 and "SELECT statement 2" is the query that generates Table2. A picture of the data table generated by this query is located at www.britestudent.com/pub/2.png. I can get away with using the MAX() function because values in the AmountUsed and AnsweredCorrectly columns in Table1 will always be larger than those in Table2 (and they are identical in the last three columns of both tables). What I fail at is implementing (3). Any suggestions on how to do this will be appreciated. It is tricky because the UserName values in Table3 are null, and because the CourseName values in the combined Table1 and Table2 results are not unique (but they are unique in Table3). After implementing (3), the final table should look like the table in picture 2.png with the addition of the last row from Table3 (the row with the CourseName value starting with "4. Klasse..." I have tried to implement (3) using another derived table using SELECT, MAX() and UNION, but I could not get it to work. Below is my full SQL query with the lines from this failed attempt to implement (3) commented out. Cheers, Frederick PS--I am new to this forum (and new to SQL as well), but I have had more of my previous problems answered by reading other people's posts on this forum than from reading any other forum or Web site. This forum is a great resources. -- SELECT DISTINCT MAX(UserName), MAX(AmountUsed) AS AmountUsed, MAX(AnsweredCorrectly) AS AnsweredCorrectly, CourseName, MAX(course_code) AS course_code, MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse, MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse -- FROM ( SELECT DISTINCT UserName AS UserName, MAX(AmountUsed) AS AmountUsed, MAX(AnsweredCorrectly) AS AnsweredCorrectly, CourseName, MAX(course_code) AS course_code, MAX(NoOfQuestionsInCourse) AS NoOfQuestionsInCourse, MAX(NoOfQuestionSetsInCourse) AS NoOfQuestionSetsInCourse FROM ( -- Table 1 - All UserAccount/Course combinations that have had quizzez. 
SELECT DISTINCT dbo.win_user.user_name AS UserName, cast(dbo.GetAmountUsed(dbo.session_header.win_user_id, dbo.course.course_id, dbo.course.no_of_questionsets_in_course) as nvarchar(10)) AS AmountUsed, Isnull(cast(dbo.GetAnswerCorrectly(dbo.session_header.win_user_id, dbo.course.course_id, dbo.question_set.no_of_questions) as nvarchar(10)),0) AS AnsweredCorrectly, dbo.course.course_name AS CourseName, dbo.course.course_code, dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse, dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse FROM dbo.session_detail INNER JOIN dbo.session_header ON dbo.session_detail.session_header_id = dbo.session_header.session_header_id INNER JOIN dbo.win_user ON dbo.session_header.win_user_id = dbo.win_user.win_user_id INNER JOIN dbo.win_user_course ON dbo.win_user_course.win_user_id = dbo.win_user.win_user_id INNER JOIN dbo.question_set ON dbo.session_header.question_set_id = dbo.question_set.question_set_id RIGHT OUTER JOIN dbo.course ON dbo.win_user_course.course_id = dbo.course.course_id WHERE (dbo.session_detail.no_of_attempts = 1 OR dbo.session_detail.no_of_attempts IS NULL) AND (dbo.session_detail.is_correct = 1 OR dbo.session_detail.is_correct IS NULL) AND (dbo.win_user_course.is_active = 'True') GROUP BY dbo.win_user.user_name, dbo.course.course_name, dbo.question_set.no_of_questions, dbo.course.no_of_questions_in_course, dbo.course.no_of_questionsets_in_course, dbo.session_header.win_user_id, dbo.course.course_id, dbo.course.course_code UNION ALL -- Table 2 - All UserAccount/Course combinations that do or do not have quizzes but where the Course is selected for quizzes for that User Account. SELECT dbo.win_user.user_name AS UserName, -1 AS AmountUsed, -1 AS AnsweredCorrectly, dbo.course.course_name AS CourseName, dbo.course.course_code, dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse, dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse FROM dbo.win_user_course INNER JOIN dbo.win_user ON dbo.win_user_course.win_user_id = dbo.win_user.win_user_id RIGHT OUTER JOIN dbo.course ON dbo.win_user_course.course_id = dbo.course.course_id WHERE (dbo.win_user_course.is_active = 'True') GROUP BY dbo.win_user.user_name, dbo.course.course_name, dbo.course.no_of_questions_in_course, dbo.course.no_of_questionsets_in_course, dbo.course.course_id, dbo.course.course_code ) dt_derivedTable_1 GROUP BY CourseName, UserName -- UNION ALL -- Table 3 - All Courses. -- SELECT DISTINCT null AS UserName, -- -2 AS AmountUsed, -- -2 AS AnsweredCorrectly, -- dbo.course.course_name AS CourseName, -- dbo.course.course_code, -- dbo.course.no_of_questions_in_course AS NoOfQuestionsInCourse, -- dbo.course.no_of_questionsets_in_course AS NoOfQuestionSetsInCourse -- FROM dbo.course -- WHERE is_active = 'True' -- ) dt_derivedTable_2 -- GROUP BY CourseName -- ORDER BY CourseName
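
    The third requirement is the classic "add only the rows whose key is not already present" pattern. Stripped of the question's specific columns, a sketch of the shape (Table1, Table2 and Table3 stand for the three SELECT statements) might look like this:

        -- Sketch of requirement (3): rows from Table3 only when their CourseName
        -- does not already appear in the combined Table1 + Table2 result.
        WITH Combined AS
        (
            SELECT UserName, AmountUsed, AnsweredCorrectly, CourseName
            FROM   Table1
            UNION ALL
            SELECT UserName, AmountUsed, AnsweredCorrectly, CourseName
            FROM   Table2 AS t2
            WHERE  NOT EXISTS (SELECT 1 FROM Table1 AS t1
                               WHERE t1.UserName   = t2.UserName
                                 AND t1.CourseName = t2.CourseName)
        )
        SELECT UserName, AmountUsed, AnsweredCorrectly, CourseName
        FROM   Combined
        UNION ALL
        SELECT t3.UserName, t3.AmountUsed, t3.AnsweredCorrectly, t3.CourseName
        FROM   Table3 AS t3
        WHERE  NOT EXISTS (SELECT 1 FROM Combined AS c
                           WHERE c.CourseName = t3.CourseName);

    In the real query the two SELECTs from the question would take the place of Table1 and Table2 inside the CTE, and the extra columns (course_code and the counts) would simply be carried through.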

  • Cannot Create New Team Project TFS2010 TF249063 TF218017

    - by Kodicus
    Server: Windows 2008 R2 Standard Team Foundation Server 2010 WSS 3.0 TFS Configuration: Single Server instalation (including SharePoint) The following error occurs when trying to create a new team project from my local machine. The ://sourcecontrol site and ://sourcecontrol/sites/DefaultCollection/ site appears to be functioning fine and my user is a Site collection administrator on both. I can navigate both sites through a browser on my local machine. Thanks for your help! 2010-04-23T10:01:42 | Module: Internal | Team Foundation Server proxy retrieved | Completion time: 0 seconds 2010-04-23T10:01:42 | Module: Wizard | Retrieved IAuthorizationService proxy | Completion time: 0 seconds 2010-04-23T10:01:42 | Module: Wizard | TF30227: Project creation permissions retrieved | Completion time: 0.109382 seconds 2010-04-23T10:01:42 | Module: Internal | The template information for Team Foundation Server "sourcecontrol\DefaultCollection" was retrieved from the Team Foundation Server. | Completion time: 0.15626 seconds ---begin Exception entry--- Time: 2010-04-23T10:03:24 Module: Wizard Exception Message: TF218017: A SharePoint site could not be created for use as the team project portal. The following error occurred: TF249063: The following Web service is not available: ://sourcecontrol/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.. Verify that the following URL points to a valid SharePoint Web application and that the application is available: ://sourcecontrol. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerException) Exception Stack Trace: at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.CheckCreateSite(TfsTeamProjectCollection tfsServer, Uri adminUri, Uri siteUri) at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.ValidateSettings(ProjectCreationContext context) at Microsoft.VisualStudio.TeamFoundation.PortfolioProjectForm.OnFinish() Inner Exception Details: Exception Message: TF249063: The following Web service is not available: ://sourcecontrol/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. The underlying error is: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.. Verify that the following URL points to a valid SharePoint Web application and that the application is available: ://sourcecontrol. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. 
(type TeamFoundationServiceUnavailableException) Exception Stack Trace: at Microsoft.TeamFoundation.Client.SharePoint.SharePointTeamFoundationIntegrationService.HandleException(Exception e) at Microsoft.TeamFoundation.Client.SharePoint.SharePointTeamFoundationIntegrationService.CheckUrl(String absolutePath, CheckUrlOptions options, Guid configurationServerId, Guid projectCollectionId) at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.CheckUrl(ICredentials credentials, Uri adminUrl, Uri siteUrl, CheckUrlOptions options, Guid configurationServerId, Guid projectCollectionId) at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.CheckCreateSite(TfsConnection tfs, Uri adminUrl, Uri siteUrl) at Microsoft.VisualStudio.TeamFoundation.WssSiteCreator.CheckCreateSite(TfsTeamProjectCollection tfsServer, Uri adminUri, Uri siteUri) Inner Exception Details: Exception Message: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. (type WebException) Exception Stack Trace: at System.Net.WebRequest.GetResponse() at Microsoft.TeamFoundation.Client.TeamFoundationClientProxyBase.AsyncWebRequest.ExecRequest(Object obj) Inner Exception Details: Exception Message: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. (type IOException) Exception Stack Trace: at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.PooledStream.Read(Byte[] buffer, Int32 offset, Int32 size) at System.Net.Connection.SyncRead(WebRequest request, Boolean userRetrievedStream, Boolean probeRead) Inner Exception Details: Exception Message: An existing connection was forcibly closed by the remote host (type SocketException) Exception Stack Trace: at System.Net.Sockets.Socket.Receive(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags) at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size) --- end Exception entry ---

  • SQL performance on new server

    - by Rapunzo
    My database is running on a PC (AMD Phenom X6, Intel SSD disk, 8 GB DDR3 RAM and Windows 7 + SQL Server 2008 R2 SP3) and it has started to struggle, with timeout problems and queries of up to 30 seconds once the database passed 200 MB. I also have an old server PC (IBM x-series 266: 72*3 15k rpm SCSI discs with RAID 5, 4 GB RAM and Windows Server 2003 + SQL Server 2008 R2 SP3), and the same query starts to give results in 100 seconds. I tried the query analyser tool for tuning my indexes, but without much improvement. It's a big disappointment for me, because I thought that even though it's an old server PC it should be more powerful with 15k rpm discs in RAID 5. What should I do? Do I need a $10,000 new server to get good performance from my SQL Server? Can't I use that IBM server? Extra information: there are 50 SQL users and it's an ERP program. Here is my query: ALTER FUNCTION [dbo].[fnDispoTerbiye] ( ) RETURNS TABLE AS RETURN ( SELECT MD.dispoNo, SV.sevkNo, M1.musteriAdi AS musteri, SD.tipTurId, TT.tipTur, SD.tipNo, SD.desenNo, SD.varyantNo, SUM(T.topMetre) AS toplamSevkMetre, MD.dispoMetresi, DT.gelisMetresi, ISNULL(DT.fire, 0) AS fire, SV.sevkTarihi, DT.gelisTarihi, SP.mamulTermin, SD.miktar AS siparisMiktari, M.musteriAdi AS boyahane, MD.akisNotu AS islemler, --dbo.fnAkisIslemleri(MD.dispoNo) DT.partiNo, DT.iplikBoyaId, B.tanimAd AS BoyaTuru, MAX(HD.hamEn) AS hamEn, MAX(HD.hamGramaj) AS hamGramaj, TS.mamulEn, TS.mamulGramaj, DT.atkiCekmesi, DT.cozguCekmesi, DT.fiyat, DV.dovizCins, DT.dovizId, (SELECT CASE WHEN DT.dovizId = 2 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 2 ORDER BY tarih DESC), 2) AS numeric(18, 2)) WHEN DT.dovizId = 3 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 3 ORDER BY tarih DESC), 2) AS numeric(18, 2)) WHEN DT.dovizId = 1 THEN CAST(round(SUM(T .topMetre) * DT.fiyat * (SELECT TOP 1 satis FROM tblKur WHERE dovizId = 1 ORDER BY tarih DESC), 2) AS numeric(18, 2)) END AS Expr1) AS ToplamTLfiyat, DT.aciklama, MD.dispoNotu, SD.siparisId, SD.siparisDetayId, DT.sqlUserName, DT.kayitTarihi, O.orguAd, 'Çözgü=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 1) AS Expr1) + ')' + ' Atki=(' + (SELECT dbo.fnTipIplikler(SD.tipTurId, SD.tipNo, SD.desenNo, SD.varyantNo, 2) AS Expr1) + ')' AS iplikAciklama, DT.prosesOk, dbo.[fnYikamaTalimat](SP.siparisId) yikamaTalimati FROM tblDoviz AS DV WITH(NOLOCK) INNER JOIN tblDispoTerbiye AS DT WITH(NOLOCK) INNER JOIN tblTanimlar AS B WITH(NOLOCK) ON DT.iplikBoyaId = B.tanimId AND B.tanimTurId = 2 ON DV.id = DT.dovizId RIGHT OUTER JOIN tblMusteri AS M1 WITH(NOLOCK) INNER JOIN tblSiparisDetay AS SD WITH(NOLOCK) INNER JOIN tblDispo AS MD WITH(NOLOCK) ON SD.siparisDetayId = MD.siparisDetayId INNER JOIN tblTipTur AS TT WITH(NOLOCK) ON SD.tipTurId = TT.tipTurId INNER JOIN tblSiparis AS SP WITH(NOLOCK) ON SD.siparisId = SP.siparisId ON M1.musteriNo = SP.musteriNo INNER JOIN tblTip AS TP WITH(NOLOCK) ON SD.tipTurId = TP.tipTurId AND SD.tipNo = TP.tipNo AND SD.desenNo = TP.desen AND SD.varyantNo = TP.varyant INNER JOIN tblOrgu AS O WITH(NOLOCK) ON TP.orguId = O.orguId INNER JOIN tblMusteri AS M WITH(NOLOCK) INNER JOIN tblSevkiyat AS SV WITH(NOLOCK) ON M.musteriNo = SV.musteriNo INNER JOIN tblSevkDetay AS SVD WITH(NOLOCK) ON SV.sevkNo = SVD.sevkNo ON MD.mamulDispoHamSevkno = SV.sevkNo LEFT OUTER JOIN tblTop AS T WITH(NOLOCK) INNER JOIN tblDispo AS HD WITH(NOLOCK) ON T.dispoNo = HD.dispoNo AND T.dispoTuruId = HD.dispoTuruId ON SVD.dispoTuruId =
T.dispoTuruId AND SVD.dispoNo = T.dispoNo AND SVD.topNo = T.topNo AND MD.siparisDetayId = HD.siparisDetayId ON DT.dispoTuruId = MD.dispoTuruId AND DT.dispoNo = MD.dispoNo LEFT OUTER JOIN tblDispoTerbiyeTest AS TS WITH(NOLOCK) ON DT.dispoTuruId = TS.dispoTuruId AND DT.dispoNo = TS.dispoNo --WHERE DT.gelisTarihi IS NULL -- OR DT.gelisTarihi > GETDATE()-30 GROUP BY MD.dispoNo, DT.partiNo, DT.iplikBoyaId, TS.mamulEn, TS.mamulGramaj, DT.gelisMetresi, DT.gelisTarihi, DT.atkiCekmesi, DT.cozguCekmesi, DT.fire, DT.fiyat, DT.aciklama, DT.sqlUserName, DT.kayitTarihi, SD.tipTurId, TT.tipTur, SD.tipNo, SD.desenNo, SD.varyantNo, SD.siparisId, SD.siparisDetayId, B.tanimAd, M.musteriAdi, M.musteriAdi, M1.musteriAdi, O.orguAd, TP.iplikAciklama, SD.miktar, MD.dispoNotu, SP.mamulTermin, DT.dovizId, DV.dovizCins, MD.dispoMetresi, MD.akisNotu, SV.sevkNo, SV.sevkTarihi, DT.prosesOk,SP.siparisId )
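
    Not part of the original post, but before buying hardware it is worth asking the 2008 R2 instance what it thinks is missing; the optimizer records candidate indexes in the missing-index DMVs. A hedged sketch (treat the suggestions as hints, not gospel):

        SELECT   mid.statement AS table_name,
                 mid.equality_columns,
                 mid.inequality_columns,
                 mid.included_columns,
                 migs.user_seeks,
                 migs.avg_user_impact
        FROM     sys.dm_db_missing_index_details AS mid
        JOIN     sys.dm_db_missing_index_groups AS mig
                 ON mig.index_handle = mid.index_handle
        JOIN     sys.dm_db_missing_index_group_stats AS migs
                 ON migs.group_handle = mig.index_group_handle
        ORDER BY migs.user_seeks * migs.avg_user_impact DESC;

    For a function with this many joins, the actual execution plan plus these suggestions usually says more about the slowdown than the age of the disks does.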

  • Why Do I See the "In Recovery" Msg, and How Can I Prevent it?

    - by John Hansen
    The project I'm working on creates a local copy of the SQL Server database for each SVN branch you work on. We're running SQL Server 2008 Express with Advanced Services on our local machine to host it. When we create a new branch, the build script will create a new database with the ID of that branch, creates the schema objects, and copies over a selection of data from the production shadow server. After the database is created, it, or other databases on the local machine, will often go into "In Recovery" mode for several minutes. After several refreshes it comes up and is happy, but will occasionally go back into "In Recovery" mode. The database is created in simple recovery mode. The file names aren't specified, so it uses default paths for files. The size of the database after loading data is ~400 megs. It is running in SQL Server 2005 compatibility mode. The command that creates the database is: sqlcmd -S $(DBServer) -Q "IF NOT EXISTS (SELECT [name] FROM sysdatabases WHERE [name] = '$(DBName)') BEGIN CREATE DATABASE [$(DBName)]; print 'Created $(DBName)'; END" ...where $(DBName) and $(DBServer) are MSBuild parameters. I got a nice clean log file this morning. When I turned on my computer it starts all five databases. However, two of them show transactions being rolled forward and backwards. The it just keeps trying to start up all five of the databases. 2010-06-10 08:24:59.74 spid52 Starting up database 'ASPState'. 2010-06-10 08:24:59.82 spid52 Starting up database 'CommunityLibrary'. 2010-06-10 08:25:03.97 spid52 Starting up database 'DLG-R8441'. 2010-06-10 08:25:05.07 spid52 2 transactions rolled forward in database 'DLG-R8441' (6). This is an informational message only. No user action is required. 2010-06-10 08:25:05.14 spid52 0 transactions rolled back in database 'DLG-R8441' (6). This is an informational message only. No user action is required. 2010-06-10 08:25:05.14 spid52 Recovery is writing a checkpoint in database 'DLG-R8441' (6). This is an informational message only. No user action is required. 2010-06-10 08:25:11.23 spid52 Starting up database 'DLG-R8979'. 2010-06-10 08:25:12.31 spid36s Starting up database 'DLG-R8441'. 2010-06-10 08:25:13.17 spid52 2 transactions rolled forward in database 'DLG-R8979' (9). This is an informational message only. No user action is required. 2010-06-10 08:25:13.22 spid52 0 transactions rolled back in database 'DLG-R8979' (9). This is an informational message only. No user action is required. 2010-06-10 08:25:13.22 spid52 Recovery is writing a checkpoint in database 'DLG-R8979' (9). This is an informational message only. No user action is required. 2010-06-10 08:25:18.43 spid52 Starting up database 'Rls QA'. 2010-06-10 08:25:19.13 spid46s Starting up database 'DLG-R8979'. 2010-06-10 08:25:23.29 spid36s Starting up database 'DLG-R8441'. 2010-06-10 08:25:27.91 spid52 Starting up database 'ASPState'. 2010-06-10 08:25:29.80 spid41s Starting up database 'DLG-R8979'. 2010-06-10 08:25:31.22 spid52 Starting up database 'Rls QA'. In this case it kept trying to start the databases continuously until I shut down SQL Server at 08:48:19.72, 23 minutes later. Meanwhile, I actually am able to use the databases much of the time.
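
    One possibility not mentioned in the post: repeated "Starting up database" messages are often the signature of the AUTO_CLOSE option, which SQL Server Express tends to enable by default on new databases; each time the last connection closes, the database shuts down and goes through startup (and, if needed, recovery) again on the next use. A hedged sketch for checking and disabling it:

        -- Which databases have AUTO_CLOSE switched on?
        SELECT name, is_auto_close_on
        FROM   sys.databases
        WHERE  is_auto_close_on = 1;

        -- Turn it off for one of the branch databases (name taken from the log above)
        ALTER DATABASE [DLG-R8441] SET AUTO_CLOSE OFF;

    The build script could run the ALTER DATABASE right after CREATE DATABASE so every new branch database gets the setting from the start.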

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We are having an OLTP database instance, using SQL Server 2005 with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB and the import duration ranges between 10secs to 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue, where queries get timed out (default of 30 secs set in application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them of taking a long time (5-10 mins) in getting executed, when ideally the execution duration ranges between 5-10 secs. Execution plan of the same displayed Clustered Index Scan happening instead of Clustered Index Seek. All required Indexes are found to be present and Index fragmentation is also minimal as we Rebuild Indexes regularly alongwith Updating Statistics. With no other alternate options occuring to us, we restarted SQL server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits and so we also restarted IIS and that stopped the problem as of now." Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts: Blocking Bad plan in cache Outdated statistics Hardware bottleneck To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocking <> 0).  If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process.  Killing a process will cause the transaction to rollback, so you need to proceed with caution.  Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present.  You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT. The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure.  A clustered index scan might have been chosen either because that is what is in cache already or because of out of date statistics.  The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue.  The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%).  If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter.  If this parameter value is rare, then its execution plan in cache is what we call a bad plan.  You want the best plan in cache for the most frequent parameter values.  If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure.  You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache. To remove a bad plan from cache, you can recompile the stored procedure.  An alternative method is to run DBCC FREEPROCACHE which drops the procedure cache.  
    To determine whether a hardware bottleneck such as slow I/O or high CPU utilization is occurring, you will need to run Performance Monitor on the database server (a supplementary per-file latency query is sketched at the end of this post). Hopefully you already have a baseline of the server so you know what is normal and what is not. Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%. The servers that I support are typically under 30% CPU utilization, but your baseline could be higher and still be within a normal range.
    If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache. Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the things mentioned above. Proceed with caution when restarting the SQL Server service, as all transactions that have not completed will be rolled back at startup. This crash recovery process could take longer than normal if a long-running transaction was executing when the service was stopped, and until crash recovery is complete the database is unavailable to your applications.
    If restarting IIS fixes the problem, then the problem might not have been inside SQL Server at all. Prior to taking this step, you should do the analysis mentioned above.
    If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
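    For the I/O side of the hardware question, one hedged supplement to Performance Monitor is the query below, which reports average read and write latency per database file from the DMVs available in SQL Server 2005 and later. It is only a sketch; Perfmon and a proper baseline remain the recommendation above.
    -- Average I/O latency per file; values consistently above ~12 ms suggest an I/O bottleneck
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;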

    Read the article

  • Kansas City SQL Saturday 2012: BBQ Crawl

    - by Bill Graziano
    The next Kansas City SQL Saturday is coming up on August 4th.  We’ll have the usual SQL Saturday goodness: lots of technical sessions, great networking events and a fantastic speaker dinner.  And we’ll have the Third Annual Kansas City SQL Saturday BBQ Crawl.  On Friday afternoon we’ll visit a few BBQ places in town.  We tend to order big sampler plates and just share everything around.  It’s a great way to try a variety of styles.  This year we’ll be hitting an all-new selection of BBQ joints. You don’t need to be a speaker to attend.  However, the call for speakers is open until June 28th (hint, hint).  Locals and out-of-towners are all welcome. If you’re interested in attending, send me an email and I’ll get you added to the list. We finish in plenty of time to get you to the speaker dinner – as if you could eat any more.

    Read the article

  • vb.net : is it possible to connect to sql server 2008 via odbc but not through vb.net code?

    - by phill
    I'm supporting an old VB.NET program whose database was moved from SQL Server 2005 to SQL Server 2008. Is there a setting on SQL Server 2008 which will allow ODBC connections to access the database but not allow VB.NET to connect to it programmatically? The error I keep receiving in the app is: "An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server)" However, I can connect to it when I create a system DSN to the SQL Server instance, and through VS2005's Tools > Connect to Database. Here is the code I'm using to connect:
    Dim strC As String
    strC = "data source=bob; database=subscribers; user id=bobuser; password=passme"
    Dim connection As New SqlClient.SqlConnection(strC)
    Try
        connection.Open()
    Catch ex As Exception
        MsgBox(ex.Message)
    End Try
    connection.Close()
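    Not part of the original question, but one hedged diagnostic: since the DSN connection works, running the query below from that working connection shows which protocol and TCP port the instance is actually accepting connections on, which can help separate a Named Pipes/TCP configuration issue from an application-side issue.
    -- Run from the working ODBC/DSN connection to see the transport, auth scheme and port in use
    SELECT session_id, net_transport, auth_scheme, local_tcp_port
    FROM sys.dm_exec_connections;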

    Read the article

  • SQL Server 2005 to 2008 Bak file help please!

    - by Brandon
    I have a SQL Server 2005 database backup that I want to transfer to SQL Server 2008 on my server. I spent 3 days transferring the .bak file from my own machine to my server, and when I tried to restore it I got an error. I then read online about a completely different method for moving a SQL Server 2005 database to SQL Server 2008: the detach and attach method, which means I would need to detach the database in SQL Server 2005, transfer the MDF file via FTP to my server, and attach it in SQL Server 2008. But I've already used a lot of bandwidth transferring the .bak file to my server. Is there a way to convert the .bak file that is already on my server to an MDF file and attach it in SQL Server 2008?
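    Not from the original question, but as a hedged sketch of one approach: restoring the 2005 .bak directly on the 2008 instance writes out the MDF/LDF files as part of the restore, so no separate conversion step is needed (database name, logical file names and paths below are placeholders).
    -- Check the logical file names stored in the backup (path is a placeholder)
    RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb.bak';
    -- Restore the SQL Server 2005 backup on the 2008 instance; WITH MOVE creates the new MDF/LDF
    RESTORE DATABASE MyDb
    FROM DISK = N'D:\Backups\MyDb.bak'
    WITH MOVE N'MyDb_Data' TO N'D:\SQLData\MyDb.mdf',
         MOVE N'MyDb_Log'  TO N'D:\SQLData\MyDb_log.ldf',
         STATS = 10;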

    Read the article

  • Error installing TFS in Windows 8

    - by Davi Fiamenghi
    I'm trying to install TFS on my computer in order to give a demonstration, and I can't figure out what else I can do to solve these errors:
    Information [ System Checks ] TF255142: Windows Firewall is not enabled. If you enable Windows Firewall after configuring Team Foundation Server, you must add exceptions for ports used by Team Foundation Server to Windows Firewall.
    Error [ Application Tier ] TF255120: Compatibility mode for Internet Information Services (IIS) 6.0 is not enabled. Team Foundation Server requires this on this operating system.
    Error [ Application Tier ] TF255456: You must configure Internet Information Services (IIS) to use the Static Content component. Team Foundation Server requires the Static Content component in IIS.
    Error [ Application Tier ] TF255397: Windows Authentication has not been configured as a role service in Internet Information Services (IIS). Team Foundation Server requires that Windows Authentication is installed as one of the role services in IIS. (This error is reported three times.)
    As for my IIS features: everything requested in the errors is installed (I installed the features and restarted the computer), and I'm running Windows 8 Consumer Preview Build 8250. IIS is working normally at http://localhost:80 ("Default Application"). Please, am I missing something?

    Read the article

  • TFS 2010 : Unable to add Project to a collection

    - by Scott
    This morning I'm trying to set up Team Foundation Server 2010 to demo for my team. As this is just a demo, I thought I would install it on my Windows 7 machine, which also serves as my development machine. My development machine uses Visual Studio 2008 Team Suite. I installed Team Explorer 2008 and then reapplied SP1. Finally, I installed and set up TFS 2010, which by default gave me administrator privileges. I started up Visual Studio and connected to the collection just fine. However, I'm unable to create a new project and get the following error message: "TF30172: You are trying to create a team project either without required permissions or with an older version of Team Explorer. Contact your project admin..." To rule out permissions, I used my home computer, which is running Visual Studio 2010. On that machine I was able to connect to the same TFS instance and create a project with no problem. So it looks as though it is a Team Explorer problem, but everywhere on the web people are saying not only is what I'm trying to do possible, but that they have done it themselves. What am I missing to add a project to TFS 2010 under Visual Studio 2008?

    Read the article

  • Few events I'm speaking at in early 2013

    - by Mladen Prajdic
    2013 has started great and the SQL community is already brimming with events. At some of these events you can come say hi, and I'll be glad you do! These are the events with dates and locations that I know I'll be speaking at so far.
    February 16th: SQL Saturday #198 - Vancouver, Canada. The session I'll present in Vancouver is SQL Impossible: Restoring/Undeleting a table. Yes, you read the title right. No, it's not about the usual "one table per partition" and "restore full backup then copy the data over" methods. No, there are no 3rd party tools involved. Just you and your SQL Server. Yes, it's crazy. No, it's not for production purposes. And yes, that's why it's so much fun. Prepare to dive into the world of data pages, log records, deletes, truncates and backups and how it all works together to get your table back from the endless void. Want to know more? Come and see! This is an advanced level session where we'll dive into the internals of data pages, transaction log records and page restores.
    March 8th-9th: SQL Saturday #194 - Exeter, UK. In Exeter I'll be presenting twice. On the first day I'll have a full-day precon titled From SQL Traces to Extended Events - The next big switch. This pre-con will give you insight into both of the current tracing technologies in SQL Server. The old SQL Trace, which has served us well over the past 10 or so years, is on its way out because the overhead and detail it produces are no longer enough to deal with today's loads. The new Extended Events are a lightweight tracing mechanism built directly into the SQLOS, giving us information SQL Trace just couldn't. They were designed and built with performance in mind, and it shows. Mastering Extended Events requires learning at least one new skill: XML querying. The second session I'll give on Saturday is titled SQL Injection from website to SQL Server. SQL Injection is still one of the biggest reasons various websites and applications get hacked. The solution, as everyone tells us, is simple: use SQL parameters. But is that enough? In this session we'll look at how an attacker would go about using SQL Injection to gain access to your database, see its schema and data, take over the server, upload files and do various other mischief on your domain. This is a fun session that always brings out a few laughs in the audience because they didn't realize what can be done.
    April 23rd-25th: NTK conference - Bled, Slovenia (Slovenian website only). This is a conference with history; this year marks its 18th year running. It's a relatively large IT conference that focuses on various Microsoft technologies like .Net, Azure, SQL Server, Exchange, Security, etc. The main sessions' language is Slovenian, but this is slowly changing, so it's becoming more interesting for foreign attendees. This year it's happening in the beautiful town of Bled in the Alps. The scenery alone is worth the visit, wouldn't you agree? And this year there are quite a few well-known speakers present! The session title isn't known yet.
    May 2nd-4th: SQL Bits XI – Nottingham, UK. SQL Bits is the largest SQL Server conference in Europe. It's a 3-day conference with top speakers and content all dedicated to SQL Server. The session I'll present here is an hour-long version of the precon I'll give in Exeter, From SQL Traces to Extended Events - The next big switch. The session description is the same as for the Exeter precon, but we'll focus more on how the Extended Events work, with only a brief overview of the old SQL Trace architecture.
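    These snippets are not from the talks themselves; they are minimal sketches of the two topics, and all session, file, table and column names in them are made up.
    -- Minimal Extended Events session capturing statements that run longer than one second
    -- (the session name and file path are placeholders; the event_file target requires SQL Server 2012+)
    CREATE EVENT SESSION LongStatements ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (WHERE duration > 1000000)            -- duration is reported in microseconds
    ADD TARGET package0.event_file (SET filename = N'D:\XE\LongStatements.xel');
    ALTER EVENT SESSION LongStatements ON SERVER STATE = START;
    And a sketch of the "use SQL parameters" advice from the injection session, using a placeholder dbo.Customers table:
    -- Parameterized dynamic SQL: the value travels as a typed parameter, never concatenated into the statement
    DECLARE @name nvarchar(50);
    SET @name = N'O''Brien';
    EXEC sp_executesql
         N'SELECT CustomerID, LastName FROM dbo.Customers WHERE LastName = @LastName',
         N'@LastName nvarchar(50)',
         @LastName = @name;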

    Read the article

  • SQLAuthority News SQL Server 2008 R2 Update for Developers Training Kit (March 2010 Update)

    SQL Server 2008 R2 offers an impressive array of capabilities for developers that build upon key innovations introduced in SQL Server 2008. The SQL Server 2008 R2 Update for Developers Training Kit is ideal for developers who want to understand how to take advantage of the key improvements introduced in SQL [...]

    Read the article

< Previous Page | 231 232 233 234 235 236 237 238 239 240 241 242  | Next Page >