Search Results

Search found 32223 results on 1289 pages for 'sql 2012'.


  • SQL Server Process Queue Race Condition

    - by William Edmondson
    I have an order queue that is accessed by multiple order processors through a stored procedure. Each processor passes in a unique ID, which is used to lock the next 20 orders for its own use. The stored procedure then returns these records to the order processor to be acted upon. There are cases where multiple processors are able to retrieve the same 'OrderTable' record, at which point they try to operate on it simultaneously. This ultimately results in errors being thrown later in the process. My next course of action is to let each processor grab all available orders and just round-robin the processors, but I was hoping to simply make this section of code thread safe and let the processors grab records whenever they like. So, explicitly: any idea why I am experiencing this race condition and how I can solve the problem?

        BEGIN TRAN
            UPDATE OrderTable WITH ( ROWLOCK )
            SET ProcessorID = @PROCID
            WHERE OrderID IN ( SELECT TOP ( 20 ) OrderID
                               FROM OrderTable WITH ( ROWLOCK )
                               WHERE ProcessorID = 0 )
        COMMIT TRAN

        SELECT OrderID, ProcessorID, etc...
        FROM OrderTable
        WHERE ProcessorID = @PROCID
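
    A common fix for this pattern (a sketch, not from the original post) is to take an update lock on the rows being claimed and skip rows another processor has already locked, so two processors cannot select the same unclaimed orders:

        BEGIN TRAN
            UPDATE OrderTable
            SET ProcessorID = @PROCID
            WHERE OrderID IN ( SELECT TOP ( 20 ) OrderID
                               -- UPDLOCK holds the rows between the subquery and the UPDATE;
                               -- READPAST skips rows already claimed by another processor
                               FROM OrderTable WITH ( UPDLOCK, READPAST, ROWLOCK )
                               WHERE ProcessorID = 0 )
        COMMIT TRAN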

    Read the article

  • SQL Server Distinct Question

    - by RPS
    I need to be able to select only the first row for each name that has the greatest value. I have a table with the following:

        id  name   value
        0   JOHN   123
        1   STEVE  125
        2   JOHN   127
        3   JOHN   126

    So I am looking to return:

        id  name   value
        1   STEVE  125
        2   JOHN   127

    Any idea on the MSSQL syntax for how to perform this operation?
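
    One common approach (a sketch, assuming the table is named dbo.MyTable) is to rank the rows per name and keep only the top-ranked one:

        SELECT id, name, value
        FROM ( SELECT id, name, value,
                      ROW_NUMBER() OVER (PARTITION BY name ORDER BY value DESC) AS rn
               FROM dbo.MyTable ) AS ranked
        WHERE rn = 1;   -- one row per name: the one with the greatest value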

    Read the article

  • delete all but minimal values, based on two columns in SQL Server table

    - by sqlill
    How do I write a statement to accomplish the following? Let's say a table has 2 columns (both are nvarchar) with the following data:

        col1   col2
        10000  10
        10000  20
        10001  10
        10002  30
        10002  40
        10002  50

    I'd like to keep only the following data:

        col1   col2
        10000  10
        10001  10
        10002  30

    thus removing the duplicates (neither of the columns is a primary key) and keeping, for each value in the first column, only the record with the minimal value in the second column. How do I accomplish this?
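
    A minimal sketch (assuming the table is named dbo.MyTable and col2 holds numeric text) is to delete every row whose col2 is larger than the smallest col2 for the same col1:

        DELETE t
        FROM dbo.MyTable AS t
        WHERE CAST(t.col2 AS INT) > ( SELECT MIN(CAST(t2.col2 AS INT))
                                      FROM dbo.MyTable AS t2
                                      WHERE t2.col1 = t.col1 );
        -- the CASTs are needed because both columns are nvarchar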

    Read the article

  • Set of Tools to optimize the performance in general of SQL Server

    - by Dave
    Hi, I know there are things out there to help optimize queries, etc., but is there anything else, something like a full package that can scan your database and highlight all the performance issues, naming conventions, tables not properly normalized, and so on? I know this is the job of a DBA, and if the DBA is good he shouldn't need a tool like that, but sometimes you start a new job, you're put in charge of an existing database, and the DB is a mess, so you don't know where to start... Thanks to everyone. Dave

    Read the article

  • similarity between strings - sql server 2005

    - by csetzkorn
    Hi, I am looking for a simple way (a UDF?) to establish the similarity between strings. The SOUNDEX and DIFFERENCE functions do not seem to do the job. Similarity should be based on the number of characters in common (order matters). For example, 'Spiruroidea sp. AM-2008' and 'Spiruroidea gen. sp. AM-2008' should be recognised as similar. Any pointers would be very much appreciated. Thanks. Christian
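
    One possible sketch (a hypothetical UDF, not a built-in function) scores similarity as the fraction of 3-character substrings of one string that also occur somewhere in the other, which keeps local character order relevant:

        CREATE FUNCTION dbo.fn_TrigramSimilarity (@a NVARCHAR(200), @b NVARCHAR(200))
        RETURNS FLOAT
        AS
        BEGIN
            DECLARE @i INT, @hits INT, @total INT;
            SET @i = 1; SET @hits = 0; SET @total = 0;
            WHILE @i <= LEN(@a) - 2
            BEGIN
                SET @total = @total + 1;
                -- count how many 3-character pieces of @a appear in @b
                IF CHARINDEX(SUBSTRING(@a, @i, 3), @b) > 0
                    SET @hits = @hits + 1;
                SET @i = @i + 1;
            END
            RETURN CASE WHEN @total = 0 THEN 0 ELSE CAST(@hits AS FLOAT) / @total END;
        END

    Calling dbo.fn_TrigramSimilarity(N'Spiruroidea sp. AM-2008', N'Spiruroidea gen. sp. AM-2008') would then return a value close to 1 for these two names.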

    Read the article

  • Adding Column While Selecting Table in SQL

    - by kmkperumal
    My first table is ProjectCustomFields:

        CustomFieldId  ProjectId  CustomFieldName  CustomFieldRequired  CustomFieldDataType
        69             1          User Name        1                    0
        72             1          City             1                    0
        74             1          Email            0                    0
        82             1          Salary           1                    2

    My second table is ProjectCustomFieldValues:

        CustomFieldValueId  ProjectId  CustomFieldId  CustomFieldValue   RecordId
        35                  1          69             kaliya             1
        36                  1          72             Bangalore          1
        37                  1          74             [email protected]  1
        41                  1          69             Yohesh             2
        42                  1          72             Delhi              2
        43                  1          74                                2
        50                  1          69             sss                3
        51                  1          72             Delhi              3
        52                  1          74             [email protected]  3
        57                  1          69             Sunil              4
        58                  1          72             Mumbai             4
        59                  1          74             [email protected]  4
        60                  1          82             20000              4

    I tried the query below:

        Select M.CustomFieldName, N.CustomFieldValue, N.RecordId
        From ( Select G.CustomFieldName, H.RecordId
               From ( Select CustomFieldName
                      From ProjectCustomFields
                      Where ProjectId = 1 ) G
               Cross Join ( Select Distinct RecordId
                            From ProjectCustomFieldValues ) H ) M
        Left Join ( Select CustFiled.CustomFieldName, CustValue.CustomFieldValue, CustValue.RecordId
                    From ProjectCustomFieldValues CustValue
                    Left Join ProjectCustomFields CustFiled
                        On CustValue.CustomFieldId = CustFiled.CustomFieldId
                    Where CustValue.AuctionId = 1 ) N
            On M.CustomFieldName = N.CustomFieldName And M.RecordId = N.RecordId

    But I got the result below:

        CustomFieldName  CustomFieldValue   RecordId
        User Name        kaliya             1
        City             Bangalore          1
        Email            [email protected]  1
        Salary           NULL               **NULL**
        User Name        Yohesh             2
        City             Delhi              2
        Email                               2
        Salary           NULL               **NULL**
        User Name        sss                3
        City             Delhi              3
        Email            [email protected]  3
        Salary           NULL               **NULL**
        User Name        NULL               **NULL**
        City             NULL               **NULL**
        Email            NULL               **NULL**
        Salary           NULL               **NULL**
        User Name        Sunil              4
        City             Mumbai             4
        Email            [email protected]  4
        Salary           20000              4

    But the expected result is:

        CustomFieldName  CustomFieldValue   RecordId
        User Name        kaliya             1
        City             Bangalore          1
        Email            [email protected]  1
        Salary           NULL               **1**
        User Name        Yohesh             2
        City             Delhi              2
        Email                               2
        Salary           NULL               **2**
        User Name        sss                3
        City             Delhi              3
        Email            [email protected]  3
        Salary           NULL               **3**
        User Name        Sunil              4
        City             Mumbai             4
        Email            [email protected]  4
        Salary           20000              4

    Please guide me, someone. I have tried a lot, but I keep getting a NULL value in RecordId, and I need it to show the same RecordId as the other rows for that record.
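
    A likely fix (a sketch based on the query above, not a confirmed answer) is to take the RecordId from the cross-joined side M rather than from N, since M always carries a RecordId while N is NULL for the missing Salary rows; the extra all-NULL group would still need to be filtered separately:

        Select M.CustomFieldName, N.CustomFieldValue,
               M.RecordId   -- comes from the cross join, so it is never NULL (COALESCE(N.RecordId, M.RecordId) also works)
        From ( Select G.CustomFieldName, H.RecordId
               From ( Select CustomFieldName
                      From ProjectCustomFields
                      Where ProjectId = 1 ) G
               Cross Join ( Select Distinct RecordId
                            From ProjectCustomFieldValues ) H ) M
        Left Join ( Select CustFiled.CustomFieldName, CustValue.CustomFieldValue, CustValue.RecordId
                    From ProjectCustomFieldValues CustValue
                    Left Join ProjectCustomFields CustFiled
                        On CustValue.CustomFieldId = CustFiled.CustomFieldId
                    Where CustValue.AuctionId = 1 ) N
            On M.CustomFieldName = N.CustomFieldName And M.RecordId = N.RecordId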

    Read the article

  • SQL Self Join Query Help

    - by hdoe123
    Hi all, I'm trying to work out a query that self-joins a table using the EventNumber. I've never done a self join before. What I'm trying to query is: when a client started off in a city that is Chester, which city did they move to? I don't want to see clients that started off in another city. I would also like to see the move only once (so I'd only like to see that they went from Chester to London, rather than Chester to London to Wales). The StartDateTime equals the previous row's EndDateTime if they moved to another city. Example data, where the client started off in Chester:

        clientid  EventNumber  City       StartDateTime     EndDateTime
        1         1            Chester    10/03/2009        11/04/2010 13:00
        1         1            Liverpool  11/04/2010 13:00  30/06/2010 16:00
        1         1            Wales      30/07/2010 16:00

    The result I would like to see is the 2nd row, so it only shows me Liverpool. Could anyone point me in the right direction, please?
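
    A minimal self-join sketch (assuming the table is named dbo.ClientCity) joins the Chester row to the row whose StartDateTime matches Chester's EndDateTime, which returns only the first move:

        SELECT first_city.clientid,
               next_city.City AS MovedTo
        FROM dbo.ClientCity AS first_city
        JOIN dbo.ClientCity AS next_city
             ON  next_city.clientid      = first_city.clientid
             AND next_city.StartDateTime = first_city.EndDateTime   -- the move directly after Chester
        WHERE first_city.City = 'Chester';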

    Read the article

  • Distributed Transactions in SQL Server 2005

    - by AJM
    As part of a transaction I'm modifying rows in tables via a linked server, so I have to specify "SET XACT_ABORT ON" in my sproc, otherwise it won't execute. Now I'm noticing that SCOPE_IDENTITY() is returning NULL, which is presumably something to do with the distributed transaction scope? Does anyone know why, and how to resolve it?
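
    One workaround sketch (assuming a hypothetical local table dbo.LocalTable with an identity column Id) is to capture the new key with an OUTPUT clause instead of relying on SCOPE_IDENTITY(); note that OUTPUT only works when the insert target is a local table, not a remote one:

        DECLARE @NewIds TABLE (Id INT);

        INSERT INTO dbo.LocalTable (SomeColumn)
        OUTPUT inserted.Id INTO @NewIds      -- captures the identity value directly from the insert
        VALUES ('some value');

        SELECT Id FROM @NewIds;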

    Read the article

  • Need some help understanding IO Statistics

    - by Abe Miessler
    I have a query that has a very costly INDEX SEEK operation in the execution plan. In order to track down the cause I set STATISTICS IO on and ran it. In the problem section it gave the following statistics:

        Table '#TempStudents_Enrollment2_____________________________________000000004D5F'. Scan count 0, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#TempRace2______________________________________________000000004D58'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RefRace'. Scan count 120, logical reads 240, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RefFedEnctyRaceCatg'. Scan count 18, logical reads 36, physical reads 2, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#43B0BA0F'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#42BC95D6'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#41C8719D'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#40D44D64'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#LEA2_________________________________________________000000004D56'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#39332B9C'. Scan count 1, logical reads 60, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#School2________________________________________________000000004D57'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#GenderKey______________________________________________000000004D5A'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#LangAcqKey_____________________________________________000000004D5B'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#TransferCatKey___________________________________________000000004D5C'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#ResCatKey______________________________________________000000004D5D'. Scan count 1, logical reads 29164, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RPT_SnapShot_1_4_StuPgm_Denorm'. Scan count 2344954, logical reads 4992518, physical reads 16, read-ahead reads 8, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#3FE0292B'. Scan count 1, logical reads 2344954, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'RPT_SnapShot_1_4_StuEnrlmt_Denorm'. Scan count 20, logical reads 87679, physical reads 0, read-ahead reads 87425, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table '#GradeKey_______________________________________________000000004D59'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    What should I look for in here when I'm looking to improve the performance? The line with over 2 million for the scan count looked suspicious to me, but I really don't know. Does anyone see anything here that I should look into in more detail?
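
    One way to follow up on the 2.3 million scans against RPT_SnapShot_1_4_StuPgm_Denorm (a diagnostic sketch, not part of the original question) is to ask SQL Server whether it has recorded a missing-index suggestion for that table:

        SELECT  d.statement            AS table_name,
                d.equality_columns,
                d.inequality_columns,
                d.included_columns,
                s.user_seeks,
                s.avg_user_impact
        FROM    sys.dm_db_missing_index_details     AS d
        JOIN    sys.dm_db_missing_index_groups      AS g ON g.index_handle = d.index_handle
        JOIN    sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
        WHERE   d.statement LIKE '%RPT_SnapShot_1_4_StuPgm_Denorm%';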

    Read the article

  • Help with simple SQL Server query

    - by Bram
    I have two tables as follows: Employees (Name nvarchar(50), Job Title nvarchar(50), Salary int) and Employers (Name nvarchar(50), Job Title nvarchar(50)). I would like to select every row from the 'Employers' table whose 'Job Title' does NOT show up in the 'Employees' table. I know this is a simple query, but it has me stumped. I'd be grateful for any help. Thanks.
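
    A minimal sketch of one way to do it (NOT EXISTS also handles NULL job titles more safely than NOT IN):

        SELECT er.*
        FROM   Employers AS er
        WHERE  NOT EXISTS ( SELECT 1
                            FROM   Employees AS ee
                            WHERE  ee.[Job Title] = er.[Job Title] );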

    Read the article

  • exec problem in sql 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its data tables. I need to create a script that will transfer the data from the first database to the second. I have created this script:

        DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX)
        DECLARE @table_cursor CURSOR

        SET @table_cursor = CURSOR FAST_FORWARD FOR
            Select TABLE_NAME FROM INFORMATION_SCHEMA.TABLES

        OPEN @table_cursor
        FETCH NEXT FROM @table_cursor INTO @table_name

        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @query = 'INSERT INTO ' + @table_name + ' SELECT * FROM MyDataBase.dbo.' + @table_name
            print @query
            exec @query

            FETCH NEXT FROM @table_cursor INTO @table_name
        END

        CLOSE @table_cursor
        DEALLOCATE @table_cursor

    The problem is that when I run the script, the "print @query" statement prints statements like this:

        INSERT INTO table SELECT * FROM MyDataBase.dbo.table

    When I copy this and run it from Management Studio it works fine. But when the script tries to run it with exec, I get this error:

        Msg 911, Level 16, State 1, Line 21
        Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'. No entry found with that name. Make sure that the name is entered correctly.

    Hope someone can tell me what is wrong with this. Best regards, Iordan Tanev
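
    A likely fix (a sketch, not from the original post): exec @query treats the contents of the variable as a procedure name, so the whole INSERT string is parsed as a (database-qualified) name. Wrapping the variable in parentheses, or passing it to sp_executesql, runs it as a batch instead:

        exec (@query);
        -- or, since @query is already nvarchar:
        exec sp_executesql @query;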

    Read the article

  • I get an error when implementing tde in SQL Server 2008

    - by mahima
    While running

        USE mssqltips_tde;
        CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TDECert
        GO

    I'm getting an error:

        Msg 156, Level 15, State 1, Line 2
        Incorrect syntax near the keyword 'KEY'.
        Msg 319, Level 15, State 1, Line 3
        Incorrect syntax near the keyword 'with'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.

    Please help me resolve this, as I need to implement encryption on my DB.
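
    One thing worth checking first (a diagnostic sketch, not a confirmed cause): CREATE DATABASE ENCRYPTION KEY is only available on SQL Server 2008 and later, and TDE itself requires Enterprise or Developer edition, so it can help to confirm which instance and edition you are actually connected to before chasing the syntax:

        SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
               SERVERPROPERTY('Edition')        AS Edition;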

    Read the article

  • Why does decimal behave differently?

    - by haansi
    Hello, I am doing this small exercise:

        declare @No decimal(38,5);
        set @No = 12345678910111213.14151;
        select @No*1000/1000, @No/1000*1000, @No;

    The results are:

        12345678910111213.141510
        12345678910111213.141000
        12345678910111213.14151

    Why are the first two results different when mathematically they should be the same?
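
    One way to see what is happening (a diagnostic sketch, not a full account of the rounding rules): SQL Server derives an intermediate precision and scale for each decimal operation, and when the intermediate precision would exceed 38 it reduces the scale, so dividing first throws away fractional digits before the multiply. The intermediate type can be inspected with SQL_VARIANT_PROPERTY:

        declare @No decimal(38,5);
        set @No = 12345678910111213.14151;
        select SQL_VARIANT_PROPERTY(@No/1000, 'Precision') as Prec,
               SQL_VARIANT_PROPERTY(@No/1000, 'Scale')     as Scale;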

    Read the article

  • Trying to verify understanding of foreign keys SQL Server

    - by msarchet
    So I'm working on a learning project to expose myself to things I don't get to do at work. I'm making a simple bug and case tracking app (I know there are a million of them; this is just to work with some tools I don't normally get to use). While designing my database I realized I've never actually set up foreign keys in any of my projects: I've used them before, but never actually set up a column as a FK. So I've designed my database as follows, which I think is close to correct (at least for the initial layout). However, when I try to add the FKs to the linking tables I get an error saying, "The tables present in the relationship must have the same number of columns". I'm doing this in SSMS by going to the Keys 'folder' and adding a FK. Is there something I am doing wrong here? I don't understand why the tables would have to have the same number of columns for me to add a FK relationship between them.
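
    For comparison, a minimal T-SQL sketch (hypothetical table names) of what the designer builds behind the scenes; note that it is the FK column list and the referenced column list that must match each other, not the total column counts of the two tables:

        CREATE TABLE dbo.Projects
        (
            ProjectId INT NOT NULL PRIMARY KEY,
            Name      NVARCHAR(100) NOT NULL
        );

        CREATE TABLE dbo.Bugs
        (
            BugId     INT NOT NULL PRIMARY KEY,
            ProjectId INT NOT NULL,
            Title     NVARCHAR(200) NOT NULL,
            CONSTRAINT FK_Bugs_Projects
                FOREIGN KEY (ProjectId) REFERENCES dbo.Projects (ProjectId)
        );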

    Read the article

  • How to improve performance of non-scalar aggregations on denormalized tables

    - by The Lazy DBA
    Suppose we have a denormalized table with about 80 columns that grows at the rate of ~10 million rows (about 5GB) per month. We currently have 3 1/2 years of data (~400M rows, ~200GB). We create a clustered index, to best suit retrieving data from the table, on the following columns that serve as our primary key:

        [FileDate] ASC, [Region] ASC, [KeyValue1] ASC, [KeyValue2] ASC

    because when we query the table, we always have the entire primary key. So these queries always result in clustered index seeks and are therefore very fast, and fragmentation is kept to a minimum. However, we do have a situation where we want to get the most recent FileDate for every Region, typically for reports, i.e.

        SELECT [Region], MAX([FileDate]) AS [FileDate]
        FROM HugeTable
        GROUP BY [Region]

    The "best" solution I can come up with for this is to create a non-clustered index on Region. Although it means an additional insert on the table during loads, the hit is minimal (we load 4 times per day, so fewer than 100,000 additional index inserts per load). Since the table is also partitioned by FileDate, results for our query come back quickly enough (200ms or so), and that result set is cached until the next load. However, I'm guessing that someone with more data warehousing experience might have a solution that's more optimal, as this, for some reason, doesn't "feel right".
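
    One alternative sketch (index name and query shape assumed, not from the original post): a nonclustered index keyed on Region and FileDate lets the per-region maximum become a handful of TOP (1) range seeks instead of an aggregate over the whole table, particularly when driven from the distinct region list via CROSS APPLY:

        CREATE NONCLUSTERED INDEX IX_HugeTable_Region_FileDate
            ON HugeTable ([Region], [FileDate] DESC);

        SELECT r.[Region], t.[FileDate]
        FROM ( SELECT DISTINCT [Region] FROM HugeTable ) AS r
        CROSS APPLY ( SELECT TOP (1) [FileDate]
                      FROM HugeTable AS h
                      WHERE h.[Region] = r.[Region]
                      ORDER BY h.[FileDate] DESC ) AS t;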

    Read the article

  • SQL Joins Excluding Data

    - by Andrew
    Say I have three tables:

        Fruit (Table 1)
        ------
        Apple
        Orange
        Pear
        Banana

        Produce Store A (Table 2 - 2 columns: Fruit for sale => Price)
        -------------------------
        Apple  => 1.00
        Orange => 1.50
        Pear   => 2.00

        Produce Store B (Table 3 - 2 columns: Fruit for sale => Price)
        ------------------------
        Apple  => 1.10
        Pear   => 2.50
        Banana => 1.00

    I would like to write a query with Column 1: the set of fruit offered at Produce Store A UNION Produce Store B, Column 2: the price of the fruit at Produce Store A (or null if that fruit is not offered), Column 3: the price of the fruit at Produce Store B (or null if that fruit is not offered). How would I go about joining the tables? I am facing a similar problem (with more complex tables), and no matter what I try, if the fruit is not at Produce Store A but is at Produce Store B, it is excluded (since I am joining Produce Store A first). I have even written a subquery to generate a full list of fruits, then left joined Produce Store A, but it is still eliminating the fruits not offered at A. Any ideas?
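
    A sketch of one way around the "joining store A first" trap (table names assumed): derive the full fruit list from both stores, then left join each store's price onto that list, so neither store's gaps can eliminate rows:

        SELECT f.Fruit,
               a.Price AS PriceStoreA,
               b.Price AS PriceStoreB
        FROM ( SELECT Fruit FROM StoreA
               UNION
               SELECT Fruit FROM StoreB ) AS f
        LEFT JOIN StoreA AS a ON a.Fruit = f.Fruit
        LEFT JOIN StoreB AS b ON b.Fruit = f.Fruit;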

    Read the article

  • SQL Server table can be queried but not updated

    - by Nigel
    I have a table which was always updatable before, but suddenly I can no longer update any of the columns in the table. I can still query the whole table and the results come back very fast, but the moment I try to update a column, the update query simply stalls and does nothing. I tried using

        select req_transactionUOW from master..syslockinfo where req_spid = -2

    to see if some orphaned transaction was locking the table, but it returns no results. I can't seem to find signs of my table being locked, but I simply cannot update it. Any clues as to how to fix the table, or whatever state it is in?
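
    A quick diagnostic sketch (not from the original post) to see whether the stalled UPDATE is simply blocked by another session, and by whom:

        SELECT session_id,
               blocking_session_id,   -- non-zero means this request is waiting on that session
               wait_type,
               wait_resource
        FROM   sys.dm_exec_requests
        WHERE  blocking_session_id <> 0;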

    Read the article

  • SQL Server 2008: CASE vs IF-ELSE-IF vs GOTO

    - by Saharsh Shah
    I have some rules in my application, and I have written the business logic of those rules in my procedure. While creating the procedure I found that a CASE statement won't work in my scenario, so I have tried two ways to perform the same operations (using IF-ELSE-IF or GOTO), shown below.

    Method 1, using IF-ELSE-IF conditions:

        DECLARE @V_RuleId SMALLINT;

        IF (@V_RuleId = 1)
        BEGIN
            /*My business logic*/
        END
        ELSE IF (@V_RuleId = 2)
        BEGIN
            /*My business logic*/
        END
        ELSE IF (@V_RuleId = 3)
        BEGIN
            /*My business logic*/
        END
        /* ... */
        ELSE IF (@V_RuleId = 19)
        BEGIN
            /*My business logic*/
        END
        ELSE IF (@V_RuleId = 20)
        BEGIN
            /*My business logic*/
        END

    Method 2, using a GOTO statement:

        DECLARE @V_RuleId SMALLINT, @V_Temp VARCHAR(100);

        SET @V_Temp = 'GOTO RULE' + CONVERT(VARCHAR, @V_RuleId);
        EXECUTE sp_executesql @V_Temp;

        RULE1:
        BEGIN
            /*My business logic*/
        END
        RULE2:
        BEGIN
            /*My business logic*/
        END
        RULE3:
        BEGIN
            /*My business logic*/
        END
        /* ... */
        RULE19:
        BEGIN
            /*My business logic*/
        END
        RULE20:
        BEGIN
            /*My business logic*/
        END

    Today I have 20 rules; the number can grow in the future. If I were able to use a CASE statement I wouldn't worry about performance, but since I can't, I am worried about the performance of my procedure. Note also that this procedure will be executed very frequently by the application. My questions are: Is there any way to use a CASE statement in my procedure? If not, which method is better to use in my procedure to improve the performance of my code? Thanks in advance...
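
    One possible sketch of a third approach (hypothetical procedure names, not from the original post): keep each rule in its own stored procedure, named by convention, and dispatch by executing a variable that holds the procedure name. This keeps each rule's plan independent and avoids one long IF chain:

        DECLARE @V_RuleId SMALLINT;
        SET @V_RuleId = 3;

        DECLARE @ProcName SYSNAME;
        SET @ProcName = N'usp_Rule' + CONVERT(NVARCHAR(10), @V_RuleId);

        IF OBJECT_ID(@ProcName, 'P') IS NOT NULL
            EXEC @ProcName;     -- EXEC of a variable runs the procedure whose name it holds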

    Read the article

  • SQL Server view: how to add missing rows using interpolation

    - by Christopher Klein
    Running into a problem. I have a table defined to hold the values of the daily treasury yield curve. It's a pretty simple table used for historical lookup of values. Notably, there are some gaps in the table at years 4, 6, 8, 9, 11-19 and 21-29. The formula is pretty simple: to calculate year 4, it's 0.5*Year3Value + 0.5*Year5Value. The problem is, how can I write a VIEW that returns the missing years? I could probably do it in a stored procedure, but the end result needs to be a view.
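
    A minimal sketch of such a view, assuming a hypothetical layout YieldCurve(CurveDate, TermYears, Rate): generate all terms 1-30 per date, keep the stored rate where one exists, and otherwise interpolate linearly between the nearest known terms on either side (which reduces to the 0.5/0.5 formula when the known neighbours are one year away on each side):

        CREATE VIEW dbo.vYieldCurveFilled
        AS
        SELECT d.CurveDate,
               n.number AS TermYears,
               COALESCE(y.Rate,
                        lo.Rate + (hi.Rate - lo.Rate)
                                * (n.number - lo.TermYears) * 1.0
                                / NULLIF(hi.TermYears - lo.TermYears, 0)) AS Rate
        FROM  ( SELECT DISTINCT CurveDate FROM dbo.YieldCurve ) AS d
        CROSS JOIN ( SELECT number FROM master..spt_values           -- built-in number list 1..30
                     WHERE type = 'P' AND number BETWEEN 1 AND 30 ) AS n
        LEFT JOIN dbo.YieldCurve AS y
               ON y.CurveDate = d.CurveDate AND y.TermYears = n.number
        OUTER APPLY ( SELECT TOP (1) TermYears, Rate FROM dbo.YieldCurve   -- nearest known shorter term
                      WHERE CurveDate = d.CurveDate AND TermYears < n.number
                      ORDER BY TermYears DESC ) AS lo
        OUTER APPLY ( SELECT TOP (1) TermYears, Rate FROM dbo.YieldCurve   -- nearest known longer term
                      WHERE CurveDate = d.CurveDate AND TermYears > n.number
                      ORDER BY TermYears ASC ) AS hi;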

    Read the article

  • Backing Up Transaction Logs to Tape?

    - by David Stein
    I'm about to put my database in the Full recovery model and start taking transaction log backups. I am taking a full nightly backup to another server, and later in the evening this file and many others are backed up to tape. My question is this: I will take hourly (or more frequent, if necessary) t-log backups and store them on the other server as well. However, if my full backups are passing DBCC and integrity checks, do I need to put my t-logs on tape? If someone wants point-in-time recovery to yesterday at 2pm, I would need the previous full backup and the transaction logs. However, other than that case, if I know my full backups are good, is there value in keeping the previous day's transaction log backups?
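
    For reference, a minimal sketch (hypothetical database name and path) of the hourly log backup this plan describes; point-in-time restores depend on an unbroken chain of these files since the last full backup:

        BACKUP LOG [MyDatabase]
        TO DISK = N'\\OtherServer\Backups\MyDatabase_20240101_1400.trn'  -- one uniquely named file per run
        WITH CHECKSUM;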

    Read the article

  • t-sql grouping query

    - by stackoverflowuser
    Hi, based on the following table:

        Name
        ---------
        A
        A
        A
        B
        B
        C
        C
        C

    I want to add another column to this table called 'OnGoing', and the values should alternate for each group of names. There are only two values, 'X' and 'Y'. So the table will look like:

        Name  OnGoing
        ----------------
        A     X
        A     X
        A     X
        B     Y
        B     Y
        C     X
        C     X
        C     X

    How do I write a query that alternates the values for each group of names?
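
    One sketch (assuming the table is named dbo.MyTable): number the distinct names with DENSE_RANK and map odd-numbered groups to 'X' and even-numbered groups to 'Y':

        SELECT Name,
               CASE WHEN DENSE_RANK() OVER (ORDER BY Name) % 2 = 1
                    THEN 'X' ELSE 'Y' END AS OnGoing
        FROM dbo.MyTable;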

    Read the article

  • Selecting fields in SQL Select statements (Dumbest SQL Question)

    - by JC
    Hello all, here's a dumb question which I can't find an answer to: I have a table which contains 20 fields, a few of which are date/time. I am interested in pulling all the fields. I would like to pull the datetime fields using the to_char function, but I don't want to individually list out all the other fields. Is there an easy way to do this? I tried

        select *, to_char(dtfield) as dt2

    and

        select to_char(dtfield) as dt2, *

    and both give errors. Thanks for all your help! JC
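
    A sketch of the usual workaround (table name assumed): qualify the * with a table alias, which lets it be combined with additional expressions in the select list:

        SELECT t.*,
               TO_CHAR(t.dtfield, 'YYYY-MM-DD HH24:MI:SS') AS dt2
        FROM   mytable t;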

    Read the article

  • Dynamically call a stored procedure from another stored procedure

    - by Greg
    I want to be able to pass the name of a stored procedure as a string into another stored procedure and have it called with dynamic parameters. I'm getting an error, though. Specifically, I've tried:

        create procedure test
            @var1 varchar(255),
            @var2 varchar(255)
        as
            select 1

        create procedure call_it
            @proc_name varchar(255)
        as
            declare @sp_str varchar(255)
            set @sp_str = @proc_name + ' ''a'',''b'''
            print @sp_str
            exec @sp_str

        exec call_it 'test'

    So procedure call_it should call procedure test with arguments 'a' and 'b'. When I run the above code I get:

        Msg 2812, Level 16, State 62, Procedure call_it, Line 6
        Could not find stored procedure 'test 'a','b''.

    However, running test 'a','b' works fine.
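
    A likely fix (a sketch, not from the original post): exec @sp_str looks for a procedure literally named test 'a','b', whereas exec (@sp_str) executes the string as a batch. A safer variant passes the arguments as real parameters through sp_executesql (this assumes the target procedure takes two varchar parameters):

        -- inside call_it:
        exec (@sp_str);

        -- or, parameterised:
        declare @sql nvarchar(400);
        set @sql = N'exec ' + quotename(@proc_name) + N' @p1, @p2';
        exec sp_executesql @sql, N'@p1 varchar(255), @p2 varchar(255)', @p1 = 'a', @p2 = 'b';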

    Read the article

  • Help with complex sql query

    - by eugeneK
    To make a long story short, I'm building a self-learning banner management system. Users will be able to insert these banners into their sites, where banners will be shown based on the sales/impressions ratio. I have 4 tables:

        Banners
            bannerID int
            bannerImage varchar....

        SmartBanners
            smartBannerID int
            smartBannerArrayID int
            bannerID int
            impressionsCount int
            visibility tinyint (percents)

        SmartBannerArrays
            smartBannerArrayID int
            userID int

        Statistics
            bannerID int
            saleAmountPerDay decimal...

    Each night I need to generate a new "visibility" for each SmartBanner, based on the whole SmartBannerArray that the same user has. So I need to get the sum of impressions and sales for each bannerID in the SmartBannerArray. All that comes to mind is to use a double cursor: the outer loop goes through SmartBannerArrays and gets the sums of impressions and sales, and the inner loop accesses each SmartBanner and changes its "visibility" percentage based on (sales/impressions)/(sumOfSales/sumOfImpressions)*100. Hope you get the picture... Is there any way to design better tables, or to avoid the double cursor, so I don't overload the server?
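
    A set-based sketch of the nightly update (join columns assumed from the table list above, and one Statistics row per banner is assumed; the Statistics table is bracketed because STATISTICS is a T-SQL keyword), which avoids cursors entirely:

        ;WITH ArrayTotals AS
        (
            SELECT sb.smartBannerArrayID,
                   SUM(sb.impressionsCount)            AS sumImpressions,
                   SUM(ISNULL(st.saleAmountPerDay, 0)) AS sumSales
            FROM SmartBanners AS sb
            LEFT JOIN [Statistics] AS st ON st.bannerID = sb.bannerID
            GROUP BY sb.smartBannerArrayID
        )
        UPDATE sb
        SET visibility = CAST(
                ( ISNULL(st.saleAmountPerDay, 0) * 1.0 / NULLIF(sb.impressionsCount, 0) )
              / NULLIF(a.sumSales * 1.0 / NULLIF(a.sumImpressions, 0), 0) * 100 AS TINYINT)
              -- tinyint is assumed large enough for the percentage, as in the schema above
        FROM SmartBanners AS sb
        JOIN ArrayTotals  AS a  ON a.smartBannerArrayID = sb.smartBannerArrayID
        LEFT JOIN [Statistics] AS st ON st.bannerID = sb.bannerID;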

    Read the article
