Search Results


  • Wait Statistics in Microsoft SQL Server

    - by KKline
    When it comes to troubleshooting in relational databases, there's no better place to start than wait statistics. In a nutshell, a wait statistic is an internal counter that tells you how long the database spent waiting for a particular resource, activity, or process. Since wait statistics are categorized by type, one look will quickly tell you what kind of problem needs your attention, assuming you know the meaning of Microsoft's lingo for each wait type....(read more)
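
    A quick way to see this for yourself is to query the wait-statistics DMV directly. The following is a minimal sketch against sys.dm_os_wait_stats, ordering wait types by total accumulated wait time; the exclusion list of benign waits is illustrative, not exhaustive:

        -- Top wait types by accumulated wait time (sketch; extend the exclusion list as needed)
        SELECT TOP 10
               wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')  -- benign idle waits
        ORDER BY wait_time_ms DESC;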

    Read the article

  • ServerName no longer works as SQL Server 2008 Default instance name?

    - by TomK
    Folks, somehow, in struggling to install something (Sage MIP), I have unsettled my system's SQL Server 2008 instance names. If I fired up SSMS and connected to "MySystemName" (while logged into MySystemName), I used to immediately connect and could view my databases. Now if I connect to "(local)" or "." I connect just fine... but "MySystemName" gives me a timeout error. If I log in with "." and run "SELECT @@SERVERNAME", it comes back with "MySystemName" just fine. I've double-checked that my network name is in the Security\Logins list, and it is, as a dbadmin. Any suggestions? I thought that having SQL Server 2005 Express might be a problem (the "battle of the default instances"), so I uninstalled that. No better.
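
    One quick diagnostic worth trying (a sketch, not a guaranteed fix for this particular timeout): compare the name the instance has recorded for itself against its current computed name. A mismatch between the two can appear after renames or unusual installs:

        -- Compare the instance's recorded name with its current computed name
        SELECT @@SERVERNAME                  AS RecordedName,
               SERVERPROPERTY('ServerName')  AS CurrentName,
               SERVERPROPERTY('MachineName') AS MachineName;

        -- If RecordedName is stale, the documented remedy is (instance restart required):
        -- EXEC sp_dropserver 'OldName';
        -- EXEC sp_addserver 'MySystemName', 'local';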

    Read the article

  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio Database project (VS 2010). When code is released to live we currently use the Visual Studio Database Project to build a script for all our schema changes. The problem we have been getting is having to alter or add to that script to add/fix data for the deployment. For example if we add a new non-nullable column to an existing table we need to populate that column with data during the insert. Other times we may want to create new records in transactional tables (e.g. assign specific users to a new security access). Do Visual Studio Database Projects have a way to store these scripts that only need to be run once and somehow include them in the build? Does it know which scripts need to be run (for example if we are inserting default data we don't want to do that again a second time)? OR Is there a better way to manage these scripts?
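
    For what it's worth, Visual Studio database projects do support pre- and post-deployment scripts (Script.PreDeployment.sql / Script.PostDeployment.sql) that use SQLCMD syntax. A common pattern, sketched below with hypothetical file and column names, is to keep each one-off data script in its own file, include it from the post-deployment script with :r, and guard it so re-running the deployment is harmless:

        -- Script.PostDeployment.sql (SQLCMD mode)
        :r .\DataScripts\PopulateNewColumn.sql

        -- DataScripts\PopulateNewColumn.sql (hypothetical example; guarded to be idempotent)
        IF EXISTS (SELECT 1 FROM dbo.MyTable WHERE NewColumn IS NULL)
        BEGIN
            UPDATE dbo.MyTable SET NewColumn = 'default' WHERE NewColumn IS NULL;
        END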

    Read the article

  • SQL language drawbacks, The Third Manifesto

    - by David Portabella
    Some time ago I read about SQL language drawbacks (the basic language specification, not vendor specific), and one of the drawbacks was that the language does not allow you to create a set of tuples that don't come from a table. For instance, SELECT firstName, lastName FROM people; this creates a set of tuples coming from the table people. Now, if I don't have this table people, and I want to return a constant, I'd need something like this to return a set of two tuples (this would not require a table): SELECT VALUES('james', 'dean'), ('tom', 'cruisse'); Why would I need that? For the same reasons that we can define constants (not only basic types, but objects and arrays too) in any advanced programming language. Workarounds? Yes, I could create a temporary table, fill in the data, and SELECT from that table. This is a hack to overcome the drawbacks of the poor SQL language. I think that I read about this somewhere in "The Third Manifesto", but I can't find the paragraph/example talking about this concrete drawback anymore. Do you know a reference for it?
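
    As an aside, the SQL standard's row constructor does cover this case, and SQL Server 2008+ (among others) supports it in the FROM clause; a small sketch:

        -- A derived table built from constants, no base table required
        SELECT firstName, lastName
        FROM (VALUES ('james', 'dean'),
                     ('tom',   'cruisse')) AS people(firstName, lastName);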

    Read the article

  • June 2013 release of SSDT contains a minor bug that you should be aware of

    - by jamiet
    I have discovered what seems, to me, like a bug in the June 2013 release of SSDT and, given the problems that it created yesterday on my current gig, I thought it prudent to write this blog post to inform people of it. I've built a very simple SSDT project to reproduce the problem that has just two tables, [Table1] and [Table2], and also a procedure [Procedure1]. The two tables have exactly the same definition; both have a single column called [Id] of type integer:

        CREATE TABLE [dbo].[Table1]
        (
            [Id] INT NOT NULL PRIMARY KEY
        )

    My stored procedure simply joins the two together, orders them by the column used in the join predicate, and returns the results:

        CREATE PROCEDURE [dbo].[Procedure1]
        AS
            SELECT t1.*
            FROM   Table1 t1
            INNER JOIN Table2 t2
                ON t1.Id = t2.Id
            ORDER BY Id

    Now if I create those three objects manually and then execute the stored procedure, it works fine. So we know that the code works. Unfortunately, SSDT thinks that there is an error here. The text of that error is:

        Procedure: [dbo].[Procedure1] contains an unresolved reference to an object. Either the object does
        not exist or the reference is ambiguous because it could refer to any of the following objects:
        [dbo].[Table1].[Id] or [dbo].[Table2].[Id].

    It's complaining that the [Id] field in the ORDER BY clause is ambiguous. Now you may well be thinking at this point "OK, just stick a table alias into the ORDER BY predicate and everything will be fine!" Well that's true, but there's a bigger problem here. One of the developers at my current client installed this drop of SSDT and all of a sudden all the builds started failing on his machine – he had errors left, right and centre because, as it transpires, we have a fair bit of code that exhibits this scenario. Worse, previous installations of SSDT do not flag this code as erroneous, and therein lies the rub. We immediately had a mass panic where we had to run around the department to our developers (of which there are many) ensuring that none of them should upgrade their SSDT installation if they wanted to carry on being productive for the rest of the day. Also bear in mind that as soon as a new drop of SSDT comes out, the previous version is instantly unavailable, so rolling back is going to be impossible unless you have created an administrative install of SSDT for that previous version. Just thought you should know! In the grand schema of things this isn't a big deal, as the bug can be worked around with a simple code modification, but forewarned is forearmed, so they say! Last thing to say: if you want to know which version of SSDT you are running, check my blog post Which version of SSDT Database Projects do I have installed? @Jamiet

    Read the article

  • When adding second processor to SQL Server, will it automatically balance the load?

    - by ddavis
    We have a SQL Server 2008 R2 (10.5) on a dedicated box with a single 2.4GHz processor, which regularly runs at 70-80% CPU. We are going to be adding a significant number of users to the application and therefore want to add a second processor to the box (scale up). Will SQL Server automatically use the second processor to balance threads, or is there additional configuration that will need to be done? In other words, will adding the second processor drop my CPU usage to 35-40% per CPU, automatically balancing the load? Based on what I read here, it seems that it will: http://msdn.microsoft.com/en-us/library/ms181007.aspx However, I've read elsewhere that CPU performance gains can be made by assigning database tables to different filegroups, but I'm not sure we want to get that complicated at this point.
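
    By default SQL Server schedules worker threads across all visible CPUs with no extra configuration, unless an affinity mask has been set. One way to sanity-check this after the upgrade, sketched below, is to look at the scheduler DMV and confirm both CPUs show up as visible and online:

        -- One row per scheduler; both CPUs should appear as VISIBLE ONLINE
        SELECT scheduler_id,
               cpu_id,
               status,
               is_online,
               current_tasks_count,
               runnable_tasks_count
        FROM sys.dm_os_schedulers
        WHERE status = 'VISIBLE ONLINE';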

    Read the article

  • SQL database testing: How to capture state of my database for rollback.

    - by Rising Star
    I have a SQL server (MS SQL 2005) in my development environment. I have a suite of unit tests for some .net code that will connect to the database and perform some operations. If the code under test works correctly, then the database should be in the same (or similar) state to how it was before the tests. However, I would like to be able to roll back the database to its state from before the tests run. One way of doing this would be to programmatically use transactions to roll back each test operation, but this is difficult and cumbersome to program; it could easily lead to errors in the test code. I would like to be able to run my tests confidently, knowing that if they destroy my tables, I can quickly restore them. What is a good way to save a snapshot of one of my databases with its tables so that I can easily restore the database to its state from before the test?
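
    One option worth considering (a sketch; on SQL Server 2005, database snapshots require Enterprise or Developer edition, and the database and file names below are hypothetical): create a database snapshot before the test run and revert to it afterwards.

        -- Before the tests: create a snapshot of the dev database
        CREATE DATABASE DevDB_PreTest ON
            (NAME = DevDB_Data, FILENAME = 'C:\Snapshots\DevDB_PreTest.ss')
        AS SNAPSHOT OF DevDB;

        -- After the tests: revert the database to its snapshotted state
        USE master;
        RESTORE DATABASE DevDB FROM DATABASE_SNAPSHOT = 'DevDB_PreTest';
        DROP DATABASE DevDB_PreTest;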

    Read the article

  • Best method to implement a filtered search

    - by j0N45
    I would like to ask your opinion when it comes to implementing a filtered search form. Let's imagine the following case: one big table with lots of columns (it might be important to say that this is SQL Server). You need to implement a form to search data in this table, and in this form you'll have several check boxes that allow you to customize the search. Now my question is: which of the following would be the best way to implement the search? Create a stored procedure with a query inside; this stored procedure will check whether the parameters are given by the application and, when they are not given, substitute a wildcard in the query. Or create a dynamic query that is built according to what is given by the application. I am asking this because I know that SQL Server creates an execution plan when the stored procedure is created, in order to optimize its performance; however, by building a dynamic query inside the stored procedure, will we sacrifice the optimization gained from the execution plan? Please tell me what would be the best approach in your opinion.
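
    For reference, the two approaches might look roughly like the sketches below (@Name and @City are hypothetical filter parameters). The catch-all form keeps a single cached plan, which is where the optimization worry comes from; the dynamic form built with sp_executesql stays parameterized, so each distinct filter combination gets its own cached plan without opening an injection hole:

        -- Approach 1: catch-all predicates in a static query
        SELECT * FROM dbo.BigTable
        WHERE (@Name IS NULL OR Name = @Name)
          AND (@City IS NULL OR City = @City);

        -- Approach 2: dynamic SQL, still parameterized
        DECLARE @sql nvarchar(max);
        SET @sql = N'SELECT * FROM dbo.BigTable WHERE 1 = 1';
        IF @Name IS NOT NULL SET @sql = @sql + N' AND Name = @Name';
        IF @City IS NOT NULL SET @sql = @sql + N' AND City = @City';
        EXEC sp_executesql @sql, N'@Name varchar(50), @City varchar(50)', @Name, @City;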

    Read the article

  • SQL*Plus: Speeding up SPOOL output (SQL*Plus Tips-5)

    - by Yuichi.Hayashi
    When you spool a large query result to a file with SQL*Plus's SPOOL command, the time spent writing output can dominate the elapsed time. To measure the cost of the query itself without any terminal output, redirect the run to /dev/null:

        $ sqlplus @sample.sql > /dev/null

    Two settings make SPOOL output noticeably faster. The first is arraysize, which controls how many rows SQL*Plus fetches from the database per round trip:

        SQL> set array[size] 100

    With this setting, each fetch returns 100 rows instead of the default 15, reducing the number of round trips while spooling a SELECT. The second is trimspool, which stops SQL*Plus from padding every spooled line with trailing spaces out to the full linesize (the default is off):

        SQL> set trims[pool] on

    Together, these settings can substantially reduce both the time taken to produce a spool file and its size.

    (Written by Hiroyuki Nakaie)

    Read the article

  • Oracle Direct Seminar recording: SQL Tuning Essentials, Part 1 & 2

    - by Yusuke.Yamamoto
    Seminar date: 2010/10/12. Category: performance/tuning. This Oracle Direct Seminar session, in two parts, covers the essentials of SQL tuning on Oracle Database. The recording and slides are available at: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/OCSTuning12_10121330.wmv http://www.oracle.com/technology/global/jp/ondemand/otn-seminar/pdf/SQL.pdf

    Read the article

  • SQL Server - Get Inserted Record Identity Value when Using a View's Instead Of Trigger

    - by CuppM
    For several tables that have identity fields, we are implementing a Row Level Security scheme using Views and Instead Of triggers on those views. Here is a simplified example structure:

        -- Table
        CREATE TABLE tblItem (
            ItemId int identity(1,1) primary key,
            Name varchar(20)
        )
        go

        -- View
        CREATE VIEW vwItem
        AS
        SELECT *
        FROM tblItem
        -- RLS Filtering Condition
        go

        -- Instead Of Insert Trigger
        CREATE TRIGGER IO_vwItem_Insert ON vwItem
        INSTEAD OF INSERT
        AS
        BEGIN
            -- RLS Security Checks on inserted Table

            -- Insert Records Into Table
            INSERT INTO tblItem (Name)
            SELECT Name FROM inserted;
        END
        go

    If I want to insert a record and get its identity, before implementing the RLS Instead Of trigger, I used:

        DECLARE @ItemId int;
        INSERT INTO tblItem (Name) VALUES ('MyName');
        SELECT @ItemId = SCOPE_IDENTITY();

    With the trigger, SCOPE_IDENTITY() no longer works - it returns NULL. I've seen suggestions for using the OUTPUT clause to get the identity back, but I can't seem to get it to work the way I need it to. If I put the OUTPUT clause on the view insert, nothing is ever entered into it.

        -- Nothing is added to @ItemIds
        DECLARE @ItemIds TABLE (ItemId int);
        INSERT INTO vwItem (Name)
        OUTPUT INSERTED.ItemId INTO @ItemIds
        VALUES ('MyName');

    If I put the OUTPUT clause in the trigger on the INSERT statement, the trigger returns the table (I can view it from SQL Management Studio). I can't seem to capture it in the calling code; either by using an OUTPUT clause on that call or using a SELECT * FROM ().

        -- Modified Instead Of Insert Trigger w/ Output
        CREATE TRIGGER IO_vwItem_Insert ON vwItem
        INSTEAD OF INSERT
        AS
        BEGIN
            -- RLS Security Checks on inserted Table

            -- Insert Records Into Table
            INSERT INTO tblItem (Name)
            OUTPUT INSERTED.ItemId
            SELECT Name FROM inserted;
        END
        go

        -- Calling Code
        INSERT INTO vwItem (Name) VALUES ('MyName');

    The only thing I can think of is to use the IDENT_CURRENT() function. Since that doesn't operate in the current scope, there's an issue of concurrent users inserting at the same time and messing it up. If the entire operation is wrapped in a transaction, would that prevent the concurrency issue?

        BEGIN TRANSACTION
        DECLARE @ItemId int;
        INSERT INTO tblItem (Name) VALUES ('MyName');
        SELECT @ItemId = IDENT_CURRENT('tblItem');
        COMMIT TRANSACTION

    Does anyone have any suggestions on how to do this better? I know people out there who will read this and say "Triggers are EVIL, don't use them!" While I appreciate your convictions, please don't offer that "suggestion".
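
    One workaround that is sometimes suggested for this situation (an untested sketch, not a confirmed answer): have the trigger capture the new identity values and hand them back as a result set, then read that result set from the calling code instead of relying on SCOPE_IDENTITY():

        -- Inside the INSTEAD OF trigger: capture and return the new ids
        DECLARE @NewIds TABLE (ItemId int);

        INSERT INTO tblItem (Name)
        OUTPUT INSERTED.ItemId INTO @NewIds
        SELECT Name FROM inserted;

        SELECT ItemId FROM @NewIds;  -- surfaces to the caller as a result set

    The calling code would then execute the INSERT against the view with ExecuteScalar/ExecuteReader rather than a plain ExecuteNonQuery, picking up the returned ItemId.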

    Read the article

  • Handling inheritance with overriding efficiently

    - by Fyodor Soikin
    I have the following two data structures. First, a list of properties applied to object triples:

        Object1  Object2  Object3  Property  Value
        O1       O2       O3       P1        "abc"
        O1       O2       O3       P2        "xyz"
        O1       O3       O4       P1        "123"
        O2       O4       O5       P1        "098"

    Second, an inheritance tree:

        O1
         +--- O2
         |     +--- O4
         +--- O3
               +--- O5

    Or viewed as a relation:

        Object  Parent
        O2      O1
        O4      O2
        O3      O1
        O5      O3
        O1      null

    The semantics of this being that O2 inherits properties from O1; O4 - from O2 and O1; O3 - from O1; and O5 - from O3 and O1, in that order of precedence.

    NOTE 1: I have an efficient way to select all children or all parents of a given object. This is currently implemented with left and right indexes, but hierarchyid could also work. This does not seem important right now.

    NOTE 2: I have triggers in place that make sure that the "Object" column always contains all possible objects, even when they do not really have to be there (i.e. have no parent or children defined). This makes it possible to use inner joins rather than severely less efficient outer joins.

    The objective is: Given a pair of (Property, Value), return all object triples that have that property with that value either defined explicitly or inherited from a parent.

    NOTE 1: An object triple (X,Y,Z) is considered a "parent" of triple (A,B,C) when it is true that either X = A or X is a parent of A, and the same is true for (Y,B) and (Z,C).

    NOTE 2: A property defined on a closer parent "overrides" the same property defined on a more distant parent.

    NOTE 3: When (A,B,C) has two parents - (X1,Y1,Z1) and (X2,Y2,Z2), then (X1,Y1,Z1) is considered a "closer" parent when: (a) X2 is a parent of X1, or (b) X2 = X1 and Y2 is a parent of Y1, or (c) X2 = X1 and Y2 = Y1 and Z2 is a parent of Z1. In other words, the "closeness" in ancestry for triples is defined based on the first components of the triples first, then on the second components, then on the third components. This rule establishes an unambiguous partial order for triples in terms of ancestry.

    For example, given the pair of (P1, "abc"), the result set of triples will be:

        O1, O2, O3  -- Defined explicitly
        O1, O2, O5  -- Because O5 inherits from O3
        O1, O4, O3  -- Because O4 inherits from O2
        O1, O4, O5  -- Because O4 inherits from O2 and O5 inherits from O3
        O2, O2, O3  -- Because O2 inherits from O1
        O2, O2, O5  -- Because O2 inherits from O1 and O5 inherits from O3
        O2, O4, O3  -- Because O2 inherits from O1 and O4 inherits from O2
        O3, O2, O3  -- Because O3 inherits from O1
        O3, O2, O5  -- Because O3 inherits from O1 and O5 inherits from O3
        O3, O4, O3  -- Because O3 inherits from O1 and O4 inherits from O2
        O3, O4, O5  -- Because O3 inherits from O1 and O4 inherits from O2 and O5 inherits from O3
        O4, O2, O3  -- Because O4 inherits from O1
        O4, O2, O5  -- Because O4 inherits from O1 and O5 inherits from O3
        O4, O4, O3  -- Because O4 inherits from O1 and O4 inherits from O2
        O5, O2, O3  -- Because O5 inherits from O1
        O5, O2, O5  -- Because O5 inherits from O1 and O5 inherits from O3
        O5, O4, O3  -- Because O5 inherits from O1 and O4 inherits from O2
        O5, O4, O5  -- Because O5 inherits from O1 and O4 inherits from O2 and O5 inherits from O3

    Note that the triple (O2, O4, O5) is absent from this list. This is because property P1 is defined explicitly for the triple (O2, O4, O5) and this prevents that triple from inheriting that property from (O1, O2, O3). Also note that the triple (O4, O4, O5) is also absent. This is because that triple inherits its value of P1="098" from (O2, O4, O5), because it is a closer parent than (O1, O2, O3).

    The straightforward way to do it is the following. First, for every triple that a property is defined on, select all possible child triples:

        select
            Children1.Id as O1, Children2.Id as O2, Children3.Id as O3,
            tp.Property, tp.Value
        from TriplesAndProperties tp
        -- Select corresponding objects of the triple
        inner join Objects as Objects1 on Objects1.Id = tp.O1
        inner join Objects as Objects2 on Objects2.Id = tp.O2
        inner join Objects as Objects3 on Objects3.Id = tp.O3
        -- Then add all possible children of all those objects
        inner join Objects as Children1 on Objects1.Id [isparentof] Children1.Id
        inner join Objects as Children2 on Objects2.Id [isparentof] Children2.Id
        inner join Objects as Children3 on Objects3.Id [isparentof] Children3.Id

    But this is not the whole story: if some triple inherits the same property from several parents, this query will yield conflicting results. Therefore, the second step is to select just one of those conflicting results:

        select * from
        (
            select
                Children1.Id as O1, Children2.Id as O2, Children3.Id as O3,
                tp.Property, tp.Value,
                row_number() over(
                    partition by Children1.Id, Children2.Id, Children3.Id, tp.Property
                    order by Objects1.[depthInTheTree] descending,
                             Objects2.[depthInTheTree] descending,
                             Objects3.[depthInTheTree] descending
                ) as InheritancePriority
            from ... (see above)
        )
        where InheritancePriority = 1

    The window function row_number() over( ... ) does the following: for every unique combination of object triple and property, it sorts all values by the ancestral distance from the triple to the parents that the value is inherited from, and then I only select the very first of the resulting list of values. A similar effect can be achieved with GROUP BY and ORDER BY statements, but I just find the window function semantically cleaner (the execution plans they yield are identical). The point is, I need to select the closest of the contributing ancestors, and for that I need to group and then sort within the group. And finally, now I can simply filter the result set by Property and Value.

    This scheme works. Very reliably and predictably. It has proven to be very powerful for the business task it implements. The only trouble is, it is awfully slow. One might point out that the join of seven tables might be slowing things down, but that is actually not the bottleneck. According to the actual execution plan I'm getting from SQL Management Studio (as well as SQL Profiler), the bottleneck is the sorting. The problem is, in order to satisfy my window function, the server has to sort by Children1.Id, Children2.Id, Children3.Id, tp.Property, Parents1.[depthInTheTree] descending, Parents2.[depthInTheTree] descending, Parents3.[depthInTheTree] descending, and there can be no indexes it can use, because the values come from a cross join of several tables.

    EDIT: Per Michael Buen's suggestion (thank you, Michael), I have posted the whole puzzle to sqlfiddle here. One can see in the execution plan that the Sort operation accounts for 32% of the whole query, and that is going to grow with the number of total rows, because all the other operations use indexes. Usually in such cases I would use an indexed view, but not in this case, because indexed views cannot contain self-joins, of which there are six. The only way that I can think of so far is to create six copies of the Objects table and then use them for the joins, thus enabling an indexed view. Has the time come that I shall be reduced to that kind of hack? The despair sets in.

    Read the article

  • How to analyze 'dbcc memorystatus' result in SQL Server 2008

    - by envykok
    Currently I am facing a SQL memory pressure issue. I have run 'dbcc memorystatus'; here is part of my result:

        Memory Manager                        KB
        VM Reserved                           23617160
        VM Committed                          14818444
        Locked Pages Allocated                0
        Reserved Memory                       1024
        Reserved Memory In Use                0

        Memory node Id = 0                    KB
        VM Reserved                           23613512
        VM Committed                          14814908
        Locked Pages Allocated                0
        MultiPage Allocator                   387400
        SinglePage Allocator                  3265000

        MEMORYCLERK_SQLBUFFERPOOL (node 0)    KB
        VM Reserved                           16809984
        VM Committed                          14184208
        Locked Pages Allocated                0
        SM Reserved                           0
        SM Committed                          0
        SinglePage Allocator                  0
        MultiPage Allocator                   408

        MEMORYCLERK_SQLCLR (node 0)           KB
        VM Reserved                           6311612
        VM Committed                          141616
        Locked Pages Allocated                0
        SM Reserved                           0
        SM Committed                          0
        SinglePage Allocator                  1456
        MultiPage Allocator                   20144

        CACHESTORE_SQLCP (node 0)             KB
        VM Reserved                           0
        VM Committed                          0
        Locked Pages Allocated                0
        SM Reserved                           0
        SM Committed                          0
        SinglePage Allocator                  3101784
        MultiPage Allocator                   300328

        Buffer Pool                           Value
        Committed                             1742946
        Target                                1742946
        Database                              1333883
        Dirty                                 940
        In IO                                 1
        Latched                               18
        Free                                  89
        Stolen                                408974
        Reserved                              2080
        Visible                               1742946
        Stolen Potential                      1579938
        Limiting Factor                       13
        Last OOM Factor                       0
        Page Life Expectancy                  5463

        Process/System Counts                 Value
        Available Physical Memory             258572288
        Available Virtual Memory              8771398631424
        Available Paging File                 16030617600
        Working Set                           15225597952
        Percent of Committed Memory in WS     100
        Page Faults                           305556823
        System physical memory high           1
        System physical memory low            0
        Process physical memory low           0
        Process virtual memory low            0

        Procedure Cache                       Value
        TotalProcs                            11382
        TotalPages                            430160
        InUsePages                            28

    Can you guide me in analyzing this result? Could a large number of cached execution plans be causing the memory issue, or is there another reason?
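
    Since CACHESTORE_SQLCP shows roughly 3 GB of single-page allocations, the plan cache is a reasonable suspect; a large population of single-use ad hoc plans is a common cause. A diagnostic sketch:

        -- Plan cache broken down by object type; watch for many single-use 'Adhoc' plans
        SELECT objtype,
               COUNT(*)                                         AS plan_count,
               SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb,
               SUM(CASE WHEN usecounts = 1 THEN 1 ELSE 0 END)   AS single_use_plans
        FROM sys.dm_exec_cached_plans
        GROUP BY objtype
        ORDER BY size_mb DESC;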

    Read the article

  • Generating the query plan takes 5 minutes, the query itself runs in milliseconds. What's up?

    - by TheImirOfGroofunkistan
    I have a fairly complex (or ugly, depending on how you look at it) stored procedure running on SQL Server 2008. It bases a lot of the logic on a view that has a pk table and a fk table. The fk table is left joined to the pk table slightly more than 30 times (the fk table has a poor design - it uses name/value pairs that I need to flatten out. Unfortunately, it's 3rd party and I cannot change it). Anyway, it had been running fine for weeks until I started noticing occasional runs that took 3-5 minutes. It turns out that this is the time it takes to generate the query plan. Once the query plan exists and is cached, the stored procedure itself runs very efficiently. Things run smoothly until there is a reason to regenerate and cache the query plan again. Has anyone seen this? Why does it take so long to generate the plan? Are there ways to make it come up with a plan faster?
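
    One lever sometimes used for compile-time problems like this (a sketch worth testing, not a guaranteed fix): hint the optimizer so it does not search the enormous join-order space that 30+ left joins create. OPTION (FORCE ORDER) keeps the join order as written, which can shrink compile time at the risk of a worse plan. The table and column names below are hypothetical:

        -- Hypothetical shape of the query; FORCE ORDER keeps the written join order
        SELECT p.id, f1.value, f2.value
        FROM dbo.pk_table AS p
        LEFT JOIN dbo.fk_table AS f1 ON f1.pk_id = p.id AND f1.name = 'attr1'
        LEFT JOIN dbo.fk_table AS f2 ON f2.pk_id = p.id AND f2.name = 'attr2'
        -- ...repeated for each of the ~30 name/value pairs
        OPTION (FORCE ORDER);  -- cuts the optimizer's join-order search space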

    Read the article

  • How to rethrow the same exception in SQL Server

    - by Shantanu Gupta
    I want to rethrow the same exception in SQL Server that occurred in my try block. I am able to throw the same message, but I want to throw the same error.

        BEGIN TRANSACTION
        BEGIN TRY
            INSERT INTO Tags.tblDomain (DomainName, SubDomainId, DomainCode, Description)
            VALUES (@DomainName, @SubDomainId, @DomainCode, @Description)
            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            DECLARE @severity int;
            DECLARE @state int;
            SELECT @severity = ERROR_SEVERITY(), @state = ERROR_STATE();
            RAISERROR(@@ERROR, @severity, @state);
            ROLLBACK TRANSACTION
        END CATCH

    The RAISERROR line raises an error, but always with error number 50000; I want the original error number to be thrown, which is why I am passing @@ERROR. I want to capture this error number at the front end, i.e.:

        catch (SqlException ex)
        {
            if (ex.Number == 2627)
                MessageBox.Show("Duplicate value cannot be inserted");
        }

    I want this functionality, which can't be achieved using RAISERROR. I don't want to give a custom error message at the back end.
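
    For what it's worth, SQL Server 2012 introduced THROW, which rethrows the original error with its original number, severity, and state; a minimal sketch:

        BEGIN TRY
            INSERT INTO Tags.tblDomain (DomainName, SubDomainId, DomainCode, Description)
            VALUES (@DomainName, @SubDomainId, @DomainCode, @Description);
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            THROW;  -- rethrows the original error, e.g. 2627 for a PK violation
        END CATCH

    On earlier versions, RAISERROR cannot reuse system error numbers below 50000, so the usual compromise is to embed ERROR_NUMBER() in the message text and parse it on the client.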

    Read the article

  • Concatenate row values T-SQL

    - by Robert
    I am trying to pull together some data for a report and need to concatenate the row values of one of the tables. Here is the basic table structure:

        Reviews
            ReviewID
            ReviewDate

        Reviewers
            ReviewerID
            ReviewID
            UserID

        Users
            UserID
            FName
            LName

    This is a M:M relationship. Each Review can have many Reviewers; each User can be associated with many Reviews. Basically, all I want to see is Reviews.ReviewID, Reviews.ReviewDate, and a concatenated string of the FNames of all the associated Users for that Review (comma delimited). Instead of:

        ReviewID---ReviewDate---User
        1----------12/1/2009----Bob
        1----------12/1/2009----Joe
        1----------12/1/2009----Frank
        2----------12/9/2009----Sue
        2----------12/9/2009----Alice

    Display this:

        ReviewID---ReviewDate----Users
        1----------12/1/2009-----Bob, Joe, Frank
        2----------12/9/2009-----Sue, Alice

    I have found this article describing some ways to do this, but most of these seem to only deal with one table, not multiple; unfortunately, my SQL-fu is not strong enough to adapt these to my circumstances. I am particularly interested in the example on that site which utilizes FOR XML PATH() as that looks the cleanest and most straightforward.

        SELECT p1.CategoryId,
               ( SELECT ProductName + ', '
                 FROM Northwind.dbo.Products p2
                 WHERE p2.CategoryId = p1.CategoryId
                 ORDER BY ProductName
                 FOR XML PATH('') ) AS Products
        FROM Northwind.dbo.Products p1
        GROUP BY CategoryId;

    Can anyone give me a hand with this? Any help would be greatly appreciated!
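
    Adapting that FOR XML PATH example to the schema above might look something like the sketch below; STUFF trims the leading delimiter:

        SELECT r.ReviewID,
               r.ReviewDate,
               STUFF((SELECT ', ' + u.FName
                      FROM Reviewers rv
                      INNER JOIN Users u ON u.UserID = rv.UserID
                      WHERE rv.ReviewID = r.ReviewID
                      ORDER BY u.FName
                      FOR XML PATH('')), 1, 2, '') AS Users
        FROM Reviews r;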

    Read the article

  • Converting a MySQL database to SQL Server

    - by every_answer_gets_a_point
    I have a MySQL database:

        /*
        MySQL Data Transfer
        Source Host: 10.0.0.5
        Source Database: jnetdata
        Target Host: 10.0.0.5
        Target Database: jnetdata
        Date: 5/26/2009 12:27:33 PM
        */
        SET FOREIGN_KEY_CHECKS=0;

        -- ----------------------------
        -- Table structure for chavrusas
        -- ----------------------------
        CREATE TABLE `chavrusas` (
          `id` int(11) NOT NULL auto_increment,
          `date_created` datetime default NULL,
          `luser_id` int(11) default NULL,
          `ruser_id` int(11) default NULL,
          `luser_type` varchar(50) default NULL,
          `ruser_type` varchar(50) default NULL,
          `SessionDay` varchar(250) default NULL,
          `SessionTime` datetime default NULL,
          `WeeklyReminder` tinyint(1) NOT NULL default '0',
          `reminder_phone` tinyint(1) NOT NULL default '0',
          `calling_card` varchar(50) default NULL,
          `active` tinyint(1) NOT NULL default '0',
          `notes` mediumtext,
          `ended` tinyint(1) NOT NULL default '0',
          `end_date` datetime default NULL,
          `initiated_by_student` tinyint(1) NOT NULL default '0',
          `initiated_by_volunteer` tinyint(1) NOT NULL default '0',
          `student_general_reason` varchar(50) default NULL,
          `volunteer_general_reason` varchar(50) default NULL,
          `student_reason` varchar(250) default NULL,
          `volunteer_reason` varchar(250) default NULL,
          `student_nli` tinyint(1) NOT NULL default '0',
          `volunteer_nli` tinyint(1) NOT NULL default '0',
          `jnet_initiated` tinyint(1) default '0',
          `belongs_to` varchar(50) default NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=5913 DEFAULT CHARSET=latin1;

        -- ----------------------------
        -- Table structure for tbluseravailability
        -- ----------------------------
        CREATE TABLE `tbluseravailability` (
          `availability_id` int(11) NOT NULL auto_increment,
          `user_id` int(11) NOT NULL,
          `weekday_id` int(11) NOT NULL,
          `timeslot_id` int(11) NOT NULL,
          PRIMARY KEY (`availability_id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=10865 DEFAULT CHARSET=latin1;

        -- ----------------------------
        -- Table structure for tblusers
        -- ----------------------------
        CREATE TABLE `tblusers` (
          `id` int(11) NOT NULL auto_increment,
          `password` varchar(50) default NULL,
          `title` varchar(255) default NULL,
          `first` varchar(255) default NULL,
          `last` varchar(255) default NULL,
          `gender` varchar(255) default NULL,
          `address` varchar(255) default NULL,
          `address_2` varchar(255) default NULL,
          `city` varchar(255) default NULL,
          `state` varchar(255) default NULL,
          `postcode` varchar(255) default NULL,
          `country` varchar(255) default NULL,
          `email` varchar(255) default NULL,
          `emailnotes` varchar(255) default NULL,
          `Home_Phone` varchar(255) default NULL,
          `Office_Phone` varchar(255) default NULL,
          `Cell_Phone` varchar(255) default NULL,
          `Contact_Preference` varchar(255) default NULL,
          `Birthdate` datetime default NULL,
          `Age` varchar(255

    ...and it goes on for about 10 MB. I need to convert it to MS SQL. How do I do it?
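
    One way to approach this (a sketch of the manual route; Microsoft's free SQL Server Migration Assistant for MySQL automates most of it) is to translate the MySQL-specific syntax: auto_increment becomes IDENTITY, backticks become brackets, tinyint(1) flags map naturally to bit, and mediumtext to varchar(max). For example, the start of the first table might convert as:

        CREATE TABLE [chavrusas] (
            [id] int IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [date_created] datetime NULL,
            [luser_id] int NULL,
            [ruser_id] int NULL,
            [SessionDay] varchar(250) NULL,
            [WeeklyReminder] bit NOT NULL DEFAULT 0,
            [notes] varchar(max) NULL
            -- ...remaining columns follow the same pattern
        );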

    Read the article

  • SQL Syntax to count unique users completing a task

    - by Belliez
    I have the following code which shows me which users have completed tickets; it lists each user and the date they closed a ticket, i.e.: Paul Matt Matt Bob Matt Paul Matt Matt. At the moment I manually count each user myself to see their totals for the day. EDIT: changed the desired output to columns instead of rows. What I have been trying to do is get SQL Server to do this for me, i.e. the final result should look like:

        Paul | 2
        Matt | 5
        Bob  | 1

    The code I am currently using is below, and I would be grateful if someone could help me change this so I can get output similar to the above.

        DECLARE @StartDate DateTime;
        DECLARE @EndDate DateTime;

        -- Date format: YYYY-MM-DD
        SET @StartDate = '2013-11-06 00:00:00'
        SET @EndDate = GETDATE() -- Today

        SELECT (SELECT Username FROM Membership WHERE UserId = Ticket.CompletedBy) AS TicketStatusChangedBy
        FROM Ticket
        INNER JOIN TicketStatus ON Ticket.TicketStatusID = TicketStatus.TicketStatusID
        INNER JOIN Membership ON Ticket.CheckedInBy = Membership.UserId
        WHERE TicketStatus.TicketStatusName = 'Completed'
          AND Ticket.ClosedDate >= @StartDate --(GETDATE() - 1)
          AND Ticket.ClosedDate <= @EndDate --(GETDATE()-0)
        ORDER BY Ticket.CompletedBy ASC, Ticket.ClosedDate ASC

    Thank you for your help and time.
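
    A grouped version of the same query should produce the per-user totals directly; a sketch based on the joins above, joining Membership on CompletedBy rather than CheckedInBy so the count is attributed to the completer:

        SELECT m.Username,
               COUNT(*) AS CompletedCount
        FROM Ticket t
        INNER JOIN TicketStatus ts ON t.TicketStatusID = ts.TicketStatusID
        INNER JOIN Membership m ON t.CompletedBy = m.UserId
        WHERE ts.TicketStatusName = 'Completed'
          AND t.ClosedDate >= @StartDate
          AND t.ClosedDate <= @EndDate
        GROUP BY m.Username
        ORDER BY CompletedCount DESC;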

    Read the article

  • T-SQL While Loop and concatenation

    - by JustinT
    I have a SQL query that is supposed to pull out a record and concat each to a string, then output that string. The important part of the query is below.

        DECLARE @counter int;
        SET @counter = 1;

        DECLARE @tempID varchar(50);
        SET @tempID = '';

        DECLARE @tempCat varchar(255);
        SET @tempCat = '';

        DECLARE @tempCatString varchar(5000);
        SET @tempCatString = '';

        WHILE @counter <= @tempCount
        BEGIN
            SET @tempID = (SELECT [Val] FROM #vals WHERE [ID] = @counter);
            SET @tempCat = (SELECT [Description] FROM [Categories] WHERE [ID] = @tempID);
            PRINT @tempCat;
            SET @tempCatString = @tempCatString + '<br/>' + @tempCat;
            SET @counter = @counter + 1;
        END

    When the script runs, @tempCatString outputs as null while @tempCat always outputs correctly. Is there some reason that concatenation won't work inside a While loop? That seems wrong, since incrementing @counter works perfectly. So is there something else I'm missing?
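
    One likely culprit, sketched below: if any single lookup returns NULL (for example, a @counter value with no matching row in #vals or [Categories]), the concatenation NULL-propagates and wipes out the whole accumulated string, even though each individual PRINT of a non-NULL @tempCat looks fine. Guarding with ISNULL makes the loop robust either way:

        -- NULL + 'anything' yields NULL under the default CONCAT_NULL_YIELDS_NULL setting,
        -- so one missing row poisons the accumulated string. Guard each piece:
        SET @tempCatString = @tempCatString + '<br/>' + ISNULL(@tempCat, '');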

    Read the article

  • Status of Data in Rollback of Large Transaction in SQL Server

    - by Lloyd Banks
    I have a data warehousing procedure that downloads and replaces dozens of tables from a linked server to a local database. Every once in a while, the code will get stuck on one of the tables on the linked server because that table is in a state of transition. I am under the assumption that since the entire procedure is considered one transaction commit, when the procedure gets stuck, none of the changes made by the procedure so far would have committed. But the opposite seems to be true: tables that were "downloaded" before the procedure got stuck have been updated with today's versions on the local server. Shouldn't SQL Server wait for the entire procedure to finish before the changes are durable?

        CREATE PROCEDURE MYIMPORT
        AS
        BEGIN
            SET NOCOUNT ON

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE1')
                DROP TABLE TABLE1

            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE1
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE1')

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE2')
                DROP TABLE TABLE2

            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE2
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE2')

            --IF THE PROCEDURE GETS STUCK HERE, THEN CHANGES TO TABLE1 WOULD HAVE BEEN MADE ON THE
            --LOCAL SERVER WHILE NO CHANGES WOULD HAVE BEEN MADE TO TABLE3 ON THE LOCAL SERVER

            IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TABLE3')
                DROP TABLE TABLE3

            SELECT COLUMN1, COLUMN2, COLUMN3
            INTO TABLE3
            FROM OPENQUERY(MYLINK, 'SELECT COLUMN1, COLUMN2, COLUMN3 FROM TABLE3')
        END
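
    A note on the premise, for what it's worth: without an explicit BEGIN TRANSACTION, each statement in a stored procedure runs as its own autocommit transaction, which would explain the observed behavior. Wrapping the body in one explicit transaction (sketched below) gives the all-or-nothing semantics the question assumes, though holding DDL like DROP TABLE inside a long transaction has locking costs of its own:

        BEGIN TRY
            BEGIN TRANSACTION

            -- ...all the DROP / SELECT INTO steps from the procedure body...

            COMMIT TRANSACTION
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        END CATCH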

    Read the article

  • Best pattern for storing (product) attributes in SQL Server

    - by EdH
    We are starting a new project where we need to store product and many product attributes in a database. The technology stack is MS SQL 2008 and Entity Framework 4.0 / LINQ for data access. The products (and Products Table) are pretty straightforward (a SKU, manufacturer, price, etc..). However there are also many attributes to store with each product (think industrial widgets). These may range from color to certification(s) to pipe size. Every product may have different attributes, and some may have multiples of the same attribute (Ex: Certifications). The current proposal is that we will basically have a name/value pair table with a FK back to the product ID in each row. An example of the attributes table may look like this:

        ProdID  AttributeName  AttributeValue
        123     Color          Blue
        123     FittingSize    1.25
        123     Certification  AS1111
        123     Certification  EE2212
        123     Certification  FM.3
        456     Pipe           11
        678     Color          Red
        999     Certification  AE1111
        ...

    Note: Attribute name would likely come from a lookup table or enum. So the main question here is: Is this the best pattern for doing something like this? How will the performance be? Queries will be based on a JOIN of the product and attributes table, and generally need many WHEREs to filter on specific attributes - the most common search will be to find a product based on a set of known/desired attributes. If anyone has any suggestions or a better pattern for this type of data, please let me know. Thanks! -Ed
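
    For the common search ("find products matching a set of known attributes"), the name/value design typically ends up with one EXISTS (or join) per required attribute. A sketch of what those queries look like, so the indexing implications are visible up front (the attribute table name here is hypothetical):

        -- Products that are Blue AND carry certification AS1111
        SELECT p.ProdID, p.SKU
        FROM Products p
        WHERE EXISTS (SELECT 1 FROM ProductAttributes a
                      WHERE a.ProdID = p.ProdID
                        AND a.AttributeName = 'Color'
                        AND a.AttributeValue = 'Blue')
          AND EXISTS (SELECT 1 FROM ProductAttributes a
                      WHERE a.ProdID = p.ProdID
                        AND a.AttributeName = 'Certification'
                        AND a.AttributeValue = 'AS1111');

        -- A composite index supports these probes:
        -- CREATE INDEX IX_ProductAttributes ON ProductAttributes (AttributeName, AttributeValue, ProdID);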

    Read the article

  • SQL Full-Text Indexing Issue

    - by Phil
    UPDATE: I have figured out a way using a form of dynamic SQL to fix this problem; thanks anyway for any help. There is something that I need to accomplish with the use of Full-Text Indexing. When I run a query (in a stored procedure) that looks like the following, with a parameter @name sent to the stored procedure by an asp.net page from user input:

        SELECT Name
        FROM dbo.UsersTable
        WHERE FREETEXT(Name, @name)

    ...the query returns values if, say, @name's value is Joe and there are 10 records with Joe in them. But if @name's value is just Jo, then it returns nothing, even though there are other records in this table that have Jo in them, like Jole or John. So the real question is: how do I get it to return values that match part of a word or phrase, not just full words? Something like FREETEXT(Name, @name*), which is not allowed as a query, but you get the idea. Is there a way to accomplish this? I'm sure there must be; I need to figure this out. Thanks for any help.
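
    For reference, full-text indexing does support prefix matching through CONTAINS with a prefix term, where the wildcard goes inside the quoted term; this may well be what the dynamic SQL ended up building. A sketch:

        -- Matches Jo, Joe, Jole, John, ... via a full-text prefix term
        DECLARE @prefix varchar(110);
        SET @prefix = '"' + @name + '*"';

        SELECT Name
        FROM dbo.UsersTable
        WHERE CONTAINS(Name, @prefix);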

    Read the article

  • LINQ to SQL - Left Outer Join with multiple join conditions

    - by dan
    I have the following SQL which I am trying to translate to LINQ:

        SELECT f.value
        FROM period AS p
        LEFT OUTER JOIN facts AS f
            ON p.id = f.periodid
            AND f.otherid = 17
        WHERE p.companyid = 100

    I have seen the typical implementation of the left outer join (i.e. into x from y in x.DefaultIfEmpty() etc.) but am unsure how to introduce the other join condition ('AND f.otherid = 17'). EDIT: Why is the 'AND f.otherid = 17' condition part of the JOIN instead of in the WHERE clause? Because f may not exist for some rows and I still want those rows to be included. If the condition is applied in the WHERE clause, after the JOIN, then I don't get the behaviour I want. Unfortunately this:

        from p in context.Periods
        join f in context.Facts on p.id equals f.periodid into fg
        from fgi in fg.DefaultIfEmpty()
        where p.companyid == 100 && fgi.otherid == 17
        select fgi.value

    seems to be equivalent to this:

        SELECT f.value
        FROM period AS p
        LEFT OUTER JOIN facts AS f
            ON p.id = f.periodid
        WHERE p.companyid = 100
          AND f.otherid = 17

    which is not quite what I'm after.
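
    One way that usually produces the intended SQL (a sketch): filter the inner sequence before the group join, so the extra condition becomes part of the LEFT JOIN rather than the WHERE clause:

        // Filter Facts first, then left-join; rows with no match survive as null
        var query =
            from p in context.Periods
            where p.companyid == 100
            join f in context.Facts.Where(x => x.otherid == 17)
                on p.id equals f.periodid into fg
            from fgi in fg.DefaultIfEmpty()
            select new { PeriodId = p.id, Fact = fgi };  // Fact is null where no match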

    Read the article

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: It seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case. We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

        Widgets
          |
          +--- Anvils (1:1)
          |      |
          |      +--- AnvilTestData (1:N)
          |
          +--- WidgetHistory (1:N)
                 |
                 +--- WidgetHistoryDetails (1:N)

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age. So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes. Getting to the point here, finally, for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

        DELETE FROM Widgets WHERE WidgetID = @WidgetID

    Pretty simple, innocuous looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

        DECLARE @AnvilID int
        SELECT @AnvilID = AnvilID FROM Anvils WHERE WidgetID = @WidgetID

        DELETE FROM AnvilTestData WHERE AnvilID = @AnvilID

        DELETE FROM WidgetHistory WHERE HistoryID IN (
            SELECT HistoryID FROM WidgetHistory WHERE WidgetID = @WidgetID)

        DELETE FROM Widgets WHERE WidgetID = @WidgetID

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data. Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest, I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.

    I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds. Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are: Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.) If not, is there another explanation for the behaviour I'm seeing? If it is a known issue, why is it an issue, and are there better workarounds I could be using?

    Read the article

  • How to check for mutual existence of fields in the same table in two columns

    - by ranabra
    I tried using "EXISTS" and "IN". Not only did I not succeed, it didn't seem like an efficient solution. Here is a simplified example:

        TblMyTable
            WorkerUserName - WorkerDegree - ManagerUserName - ManagerDegree

    I need a query where there is a mutual connection / existence. What I mean is only where the worker, e.g. "John", has a manager named "Mary", and where Mary the manager has a worker named "John". John the worker can have several managers and Mary the manager can have several workers. So the result will be (the order doesn't matter), ideally in one line:

        John - BSc  --  Mary - M.A.    or    Mary - M.A.  --  John - BSc

    The punchline is that it's only one table. It is not really managers and workers; this is just a simplification of the situation. In the real situation both are equal, in a table of names of users working with other users. Database is SQL 2005. Many thanks in advance.
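
    A self-join is the usual shape for mutual pairs; a sketch, under the assumption that "mutual" means the table contains both the (John works with Mary) row and the reversed (Mary works with John) row:

        -- Each mutual pair appears once, thanks to the < filter on names
        SELECT t1.WorkerUserName, t1.WorkerDegree,
               t1.ManagerUserName, t1.ManagerDegree
        FROM TblMyTable t1
        INNER JOIN TblMyTable t2
            ON  t1.WorkerUserName  = t2.ManagerUserName
            AND t1.ManagerUserName = t2.WorkerUserName
        WHERE t1.WorkerUserName < t1.ManagerUserName;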

    Read the article
