Search Results

Search found 29814 results on 1193 pages for 'sql datetime'.

Page 129/1193

  • Unable to add FromName to e-mail using cdosys in SQL Server 2008

    - by Alex Andronov
    I have a piece of cdosys code which runs correctly and generates e-mail, with my SQL Server 2008 server talking to an MS Exchange 2003 server. However, the from name is not appearing on the e-mails when they arrive. Is there a fault in the code, or is it not possible this way? Thanks in advance.

        usp_send_cdosysmail
            @from varchar(500),
            @to text,
            @bcc text,
            @subject varchar(1000),
            @body text,
            @smtpserver varchar(25),
            @bodytype varchar(10)
        as
        declare @imsg int
        declare @hr int
        declare @source varchar(255)
        declare @description varchar(500)
        declare @output varchar(8000)

        exec @hr = sp_oacreate 'cdo.message', @imsg out
        exec @hr = sp_oasetproperty @imsg, 'configuration.fields("http://schemas.microsoft.com/cdo/configuration/sendusing").value', '2'
        exec @hr = sp_oasetproperty @imsg, 'configuration.fields("http://schemas.microsoft.com/cdo/configuration/smtpserver").value', @smtpserver
        exec @hr = sp_oamethod @imsg, 'configuration.fields.update', null
        exec @hr = sp_oasetproperty @imsg, 'to', @to
        exec @hr = sp_oasetproperty @imsg, 'bcc', @bcc
        exec @hr = sp_oasetproperty @imsg, 'from', @from
        exec @hr = sp_oasetproperty @imsg, 'fromname', 'A From Name'
        exec @hr = sp_oasetproperty @imsg, 'subject', @subject
        -- if you are using html e-mail, use 'htmlbody' instead of 'textbody'.
        exec @hr = sp_oasetproperty @imsg, @bodytype, @body
        exec @hr = sp_oamethod @imsg, 'send', null

        -- sample error handling.
        if @hr <> 0 select @hr
        begin
            exec @hr = sp_oageterrorinfo null, @source out, @description out
            if @hr = 0
            begin
                select @output = ' source: ' + @source
                print @output
                select @output = ' description: ' + @description
                print @output
            end
            else
            begin
                print ' sp_oageterrorinfo failed.'
                return
            end
        end
        exec @hr = sp_oadestroy @imsg
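
    One thing worth trying, offered only as a hedged sketch rather than a confirmed fix: CDO.Message has no separate 'fromname' property that I am aware of, so the display name is normally embedded in the 'from' address itself. The address below is a placeholder.

        -- Hedged sketch: carry the display name inside the 'from' value instead of a
        -- separate 'fromname' property ('sender@example.com' is a placeholder address).
        exec @hr = sp_oasetproperty @imsg, 'from', '"A From Name" <sender@example.com>'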

    Read the article

  • In SQL, can I return a table with a varying number of columns?

    - by Matt
    I have a somewhat more complicated scenario, but I think it should be possible. I have a large SPROC whose result is a set of characteristics for a set of persons, so the table would look something like this:

        Property | Client1 | Client2 | Client3
        --------------------------------------
        Sex      | M       | F       | M
        Age      | 67      | 56      | 67
        Income   | Low     | Mid     | Low

    It's built using cursors, iterating over different datasets. The problem I am facing is that there is a varying number of clients and properties, so an equally valid result over a different input set might be:

        Property | Client1 | Client2
        ----------------------------
        Sex      | M       | F
        Age      | 67      | 56
        Weight   | 122     | 122

    The different number of properties is easy; those are just extra rows. My problem is that I need to declare a temporary table with a varying number of columns. There could be 2 clients or 100. Every client is guaranteed to have every property ultimately listed. What SQL structure would satisfy this, and how can I declare it and insert things into it? I can't just flip the columns and rows either, because there is a variable number of each.
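
    One way to produce a column per client when the client list is only known at run time, sketched here with assumed names (a normalized source table ClientData with Property, ClientName and Value columns - these names are placeholders, not from the original post), is to build the column list as a string and run a dynamic PIVOT:

        -- Hedged sketch with assumed object names: ClientData(Property, ClientName, Value).
        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        -- Build a comma-separated, quoted list of the client columns that actually exist.
        SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(ClientName)
                              FROM ClientData
                              FOR XML PATH('')), 1, 1, '');

        -- Pivot the rows into one column per client.
        SET @sql = N'SELECT Property, ' + @cols + N'
                     FROM (SELECT Property, ClientName, Value FROM ClientData) AS src
                     PIVOT (MAX(Value) FOR ClientName IN (' + @cols + N')) AS p;';

        EXEC sp_executesql @sql;

    The same @cols string could also drive a dynamic CREATE TABLE statement if an actual temporary table is needed rather than just a result set.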

    Read the article

  • SQL Server 2008 - Full Text Query

    - by user208662
    Hello, I have two tables in a SQL Server 2008 database at my company. The first table represents the products that my company sells; the second table contains the product manufacturer's details. These tables are defined as follows:

        Product
        -------
        ID
        Name
        ManufacturerID
        Description

        Manufacturer
        ------------
        ID
        Name

    As you can imagine, I want to make it as easy as possible for our customers to query this data. However, I'm having problems writing a forgiving yet powerful search query. For instance, I'm anticipating that people will search based on phonetic spellings, so the search terms may not match the exact data in my database. In addition, I think some individuals will search by manufacturer's name first, but I want the matching product names to appear first. Based on these requirements, I'm currently working on the following query:

        select p.Name as 'ProductName', m.Name as 'Manufacturer', r.Rank as 'Rank'
        from Product p
            inner join Manufacturer m on p.ManufacturerID = m.ID
            inner join CONTAINSTABLE(Product, Name, @searchQuery) as r

    Oddly, this query is throwing an error, and I have no idea why. Squiggles appear to the right of the last parenthesis in Management Studio, and the tooltip says "An expression of non-boolean type specified in a context where a condition is expected". I understand what this statement means; however, I guess I do not know how CONTAINSTABLE works. What am I doing wrong? Thank you
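
    A hedged guess at the cause, not a confirmed answer: CONTAINSTABLE is joined like any other rowset, so the INNER JOIN still needs an ON condition tying the KEY column it returns back to the product table's key. A minimal sketch, assuming Product.ID is the table's full-text key column:

        -- Hedged sketch: join CONTAINSTABLE on its KEY column and sort by RANK
        -- (assumes Product.ID is the full-text key for the Product table).
        select p.Name as 'ProductName', m.Name as 'Manufacturer', r.[RANK] as 'Rank'
        from Product p
            inner join Manufacturer m on p.ManufacturerID = m.ID
            inner join CONTAINSTABLE(Product, Name, @searchQuery) as r
                on p.ID = r.[KEY]
        order by r.[RANK] desc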

    Read the article

  • Fixing an SQL where statement that is ugly and confusing

    - by Mike Wills
    I am directly querying the back-end MS SQL Server of a software package. The key field (vehicle number) is defined as alpha, though we enter numeric values in the field. There is only one exception: we place an "R" before the number when the vehicle is being retired (which means we sold it or the vehicle is junked). Assuming the users do this right, we should not run into a problem using this method. (Right or wrong isn't the issue here.)

    Fast forward to now. I am trying to query a subset of these vehicle numbers (800 - 899) for some special processing. By doing a range of '800' to '899' we also get 80, 81, etc. If I cast the vehicle number to an INT, I should be able to get the right range, except that these "R" vehicles are kicking me in the butt now. I have tried

        where vehicleId not like 'R%' and cast(vehicleId as int) between 800 and 899

    however, I get a casting error on one of these "R" vehicles. What does work is

        where vehicleId not between '800' and '899' and cast(vehicleId as int) between 800 and 899

    but I feel there has to be a better and less confusing way. I have also tried other variations with HAVING and a sub-query, all producing a casting error.
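
    A pattern worth considering, offered as a hedged sketch rather than a verified fix: SQL Server does not guarantee the order in which WHERE predicates are evaluated, so the NOT LIKE test may not protect the CAST. A CASE expression does guarantee that the cast only happens for values that are all digits:

        -- Hedged sketch: the CASE returns NULL (which fails the BETWEEN) for any
        -- vehicleId containing a non-digit, so the cast never sees an 'R' value.
        where case when vehicleId not like '%[^0-9]%'
                   then cast(vehicleId as int)
              end between 800 and 899

    On SQL Server 2012 or later, TRY_CAST(vehicleId as int) between 800 and 899 expresses the same idea more directly.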

    Read the article

  • Indexing table with duplicates MySQL/SQL Server with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns:

        ID, Store_ID, Feature_ID, Order_ID, Viewed_Date, Deal_ID, IsTrial

    The ID is auto-generated. Store_ID goes from 1 to 8. Feature_ID goes from 1 to, let's say, 100. Viewed_Date is the date and time at which the row is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion.

    There are millions of rows in the table, and we have a reporting backend that needs to count the number of views in a certain period (or overall) where trial is 0, for a particular store id and a particular feature. The query takes the form:

        select count(viewed_date)
        from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
          and store_id = '2'
          and feature_id = '12'
          and Istrial = 0

    In SQL Server you can have a filtered index to use for Istrial. Is there anything similar to this in MySQL?

    Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the query time, I need a better improvement than this. Right now I have more than 4 million rows. To answer a query like the one above, it looks at 3.5 million rows in order to give me a count of 500k rows.

    PS. I forgot to add the viewed_date filter in the query originally; it is there now.
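
    One option, offered as a suggestion to test rather than a verified fix: extend the composite index so that the equality columns come first and the range column last, which lets MySQL resolve the whole WHERE clause (and the count) from the index alone:

        -- Hedged sketch: equality columns first, the date range column last, so the
        -- index covers the entire predicate of the reporting query.
        CREATE INDEX idx_store_feature_trial_date
            ON theTable (Store_ID, Feature_ID, IsTrial, Viewed_Date);

    MySQL has no filtered indexes, but because IsTrial is part of the key, the Istrial = 0 condition can still be satisfied from the index itself.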

    Read the article

  • What is the preferred method for searching table data using a stored procedure?

    - by Mourya
    I have a Customer table with Cust_Id, Name and City, and the search is based upon any or all of those three. Which one should I go for?

    Dynamic SQL:

        declare @str varchar(1000)
        set @str = 'Select [Sno],[Cust_Id],[Name],[City],[Country],[State] from Customer where 1 = 1'
        if (@Cust_Id != '')
            set @str = @str + ' and Cust_Id = ''' + @Cust_Id + ''''
        if (@Name != '')
            set @str = @str + ' and Name like ''' + @Name + '%'''
        if (@City != '')
            set @str = @str + ' and City like ''' + @City + '%'''
        exec (@str)

    Simple query:

        select [Sno],[Cust_Id],[Name],[City],[Country],[State]
        from Customer
        where (@Cust_Id = '' or Cust_Id = @Cust_Id)
          and (@Name = '' or Name like @Name + '%')
          and (@City = '' or City like @City + '%')

    Which one should I prefer (1 or 2), and what are the advantages?

    After going through everyone's suggestions, here is what I finally got:

        DECLARE @str NVARCHAR(1000)
        DECLARE @ParametersDefinition NVARCHAR(500)
        SET @ParametersDefinition = N'@InnerCust_Id varchar(10), @InnerName varchar(30), @InnerCity varchar(30)'
        SET @str = 'Select [Sno],[Cust_Id],[Name],[City],[Country],[State] from Customer where 1 = 1'

        IF (@Cust_Id != '')
            SET @str = @str + ' and Cust_Id = @InnerCust_Id'
        IF (@Name != '')
            SET @str = @str + ' and Name like @InnerName'
        IF (@City != '')
            SET @str = @str + ' and City like @InnerCity'

        -- Add the % symbol for searches based upon the LIKE keyword
        SELECT @Name = @Name + '%', @City = @City + '%'

        EXEC sp_executesql @str, @ParametersDefinition,
             @InnerCust_Id = @Cust_Id,
             @InnerName = @Name,
             @InnerCity = @City;

    References:
    http://blogs.lessthandot.com/index.php/DataMgmt/DataDesign/changing-exec-to-sp_executesql-doesn-t-p
    http://msdn.microsoft.com/en-us/library/ms175170.aspx

    Read the article

  • SQL Server: Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of:

    Performance: is there an advantage to using a smaller n when selecting, filtering and sorting on the data?

    Memory, including on the application side (C++)?

    Style/validation: how important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)?

    Anything else?

    Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practices" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data whose real maximum size will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

    Read the article

  • Please explain this delete top 100 SQL syntax

    - by Patrick
    Basically I want to do this:

        delete top (100) from table order by id asc

    but MS SQL doesn't allow ORDER BY in this position. The common solution seems to be this:

        DELETE table WHERE id IN (SELECT TOP (100) id FROM table ORDER BY id asc)

    But I also found this method here:

        delete table from (select top (100) * from table order by id asc) table

    which has a much better estimated execution plan (74:26). Unfortunately I don't really understand the syntax; please can someone explain it to me? Always interested in any other methods to achieve the same result as well.

    EDIT: I'm still not getting it, I'm afraid. I want to be able to read the query as I read the first two, which are practically English. The above queries to me are:

        delete the top 100 records from table, with the records ordered by id ascending
        delete the top 100 records from table where id is any one of (this lot of ids)
        delete table from (this lot of records) table

    I can't change the third one into a logical English sentence... I guess what I'm trying to get at is how does this turn into "delete from table (this lot of records)". The 'from' seems to be in an illogical position, and the second mention of 'table' is logically superfluous (to me).
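
    Not from the original post, but one more pattern worth comparing, sketched here using the poster's placeholder table name: SQL Server lets you delete through a common table expression, so the TOP and ORDER BY can live in the CTE while the DELETE itself stays plain.

        -- Hedged sketch: rows removed from the CTE are removed from the underlying table.
        ;WITH oldest AS (
            SELECT TOP (100) *
            FROM [table]
            ORDER BY id ASC
        )
        DELETE FROM oldest;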

    Read the article

  • SQL ADO.NET shortcut extensions (old school!)

    - by Jeff
    As much as I love me some ORMs (I've used LINQ to SQL quite a bit, and for the MSDN/TechNet Profile and Forums we're using NHibernate more and more), there are times when it's appropriate, and in some ways simpler, to just throw up some old school ADO.NET connections, commands, readers and such. It still feels like a pain though to new up all the stuff, make sure it's closed, blah blah blah. It's pretty much the least favorite task of writing data access code. To minimize the pain, I have a set of extension methods that I like to use that drastically reduce the code you have to write. Here they are...

        public static void Using(this SqlConnection connection, Action<SqlConnection> action)
        {
            connection.Open();
            action(connection);
            connection.Close();
        }

        public static SqlCommand Command(this SqlConnection connection, string sql)
        {
            var command = new SqlCommand(sql, connection);
            return command;
        }

        public static SqlCommand AddParameter(this SqlCommand command, string parameterName, object value)
        {
            command.Parameters.AddWithValue(parameterName, value);
            return command;
        }

        public static object ExecuteAndReturnIdentity(this SqlCommand command)
        {
            if (command.Connection == null)
                throw new Exception("SqlCommand has no connection.");
            command.ExecuteNonQuery();
            command.Parameters.Clear();
            command.CommandText = "SELECT @@IDENTITY";
            var result = command.ExecuteScalar();
            return result;
        }

        public static SqlDataReader ReadOne(this SqlDataReader reader, Action<SqlDataReader> action)
        {
            if (reader.Read())
                action(reader);
            reader.Close();
            return reader;
        }

        public static SqlDataReader ReadAll(this SqlDataReader reader, Action<SqlDataReader> action)
        {
            while (reader.Read())
                action(reader);
            reader.Close();
            return reader;
        }

    It has been a while since I've really revisited these, so you will likely find opportunity for further optimization. The bottom line here is that you can chain together a bunch of these methods to make a much more concise database call, in terms of the code on your screen, anyway. Here are some examples:

        public Dictionary<string, string> Get()
        {
            var dictionary = new Dictionary<string, string>();
            _sqlHelper.GetConnection().Using(connection =>
                connection.Command("SELECT Setting, [Value] FROM Settings")
                    .ExecuteReader()
                    .ReadAll(r => dictionary.Add(r.GetString(0), r.GetString(1))));
            return dictionary;
        }

    or...

        public void ChangeName(User user, string newName)
        {
            _sqlHelper.GetConnection().Using(connection =>
                connection.Command("UPDATE Users SET Name = @Name WHERE UserID = @UserID")
                    .AddParameter("@Name", newName)
                    .AddParameter("@UserID", user.UserID)
                    .ExecuteNonQuery());
        }

    The _sqlHelper.GetConnection() is just some other code that gets a connection object for you. You might have an even cleaner way to take that step out entirely. This looks more fluent, and the real magic sauce for me is the reader bits, where you can put any kind of arbitrary method in there to iterate over the results.

    Read the article

  • Setting Timeouts: SQL Server 2008/IIS 7.5

    - by Julie
    We have recently migrated from a Win 2003/SQL Server 2000 system to Win 2008 64-bit R2, SQL Server 2008 R2. Our websites are in classic ASP, and this can't be changed to another scripting language at this time.

    On the old server, if I got stuck in some kind of endless loop, the page would throw an error. On the new server, I have a page with some sort of looping problem that pegs SQL Server and therefore locks all of our websites, even though the SQL stored procedure is called only once (and runs fine as a query on the server). I'll get my code figured out, no biggie. But I need to make sure the server times out when this happens. (The page I'm working on runs fine with certain instances of the query, and locks with others using a different query variable. I can't have something like that sneak up on me on a page I haven't touched for three years.)

    I can't figure out how an SP that runs once on the server, from an ASP page, is tying up SQL Server this way. It's obviously some sort of timeout issue, but I can't figure out where, or which timeout values to change. I actually have to remote desktop to the server and kill the process in SQL Server. I'm afraid I'm a generalist, and server management is not my thing, even though it's my responsibility, so I am almost certain to have questions about any answer that I receive. How can I track this down? What settings do I need to change?

    More info: it's not SQL Server. On our test site, I created an ASP file that just did an endless loop (do while 1=1) and had the same problem - the other websites wouldn't load - without SQL Server being involved. So I think the reason the process was hanging is that the page wasn't timing out as it should, and so the connection to SQL was never closed. Killing the process in SQL Server would reset the page somehow. For my intentional endless loop, I had to refresh the app pool to get rid of it. This points more to either IIS or the ASP settings. The ASP timeouts are set to whatever the defaults were when the server was first loaded. I still can't figure out why one file is locking up all the websites, though. Again, that didn't happen on the old server.

    Read the article

  • Re-starting SQL Server and OS restarting

    - by rem
    Some manipulations of SQL Server require re-starting SQL Server afterwards. If we restart the operating system, does that always mean that we restart SQL Server as well? (It seems evident that it does, but I ask just to be sure.) Or could there be a situation where we should do it explicitly, for example by choosing "Restart" from the context menu in SQL Server Configuration Manager? That is, could it ever be necessary to restart SQL Server on its own while the OS keeps running?

    Read the article

  • Problem in SQL Server 2005

    - by megala
    I know how to create a table in SQL Server 2005, but due to a system problem I reinstalled SQL Server 2005. After that, I try to select the option Start - Programs - Microsoft SQL Server 2005 - SQL Server Management Studio Express. My problem is that SQL Server Management Studio Express does not exist there. How do I solve this problem? Thanks in advance.

    Read the article

  • Determine if application is re-using SQL Connection

    - by Steve Evans
    I have a legacy app that connects to my SQL 2008 server. I'm trying to determine whether the application is re-using its connection to the SQL Server or is creating new connections on a regular basis. Using SQL Profiler I've audited login events, but that appears to generate an event every time a SQL statement is executed, even for apps that I know are maintaining their connection to SQL.
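
    A hedged alternative to Profiler, sketched below: poll the connection DMVs and watch whether the connect times for the app's sessions keep changing (new connections) or stay put (connection reuse). The program-name filter is a placeholder you would replace with the legacy app's actual name.

        -- Hedged sketch: list current connections for one application and when they were
        -- opened; rapidly changing connect_time values suggest new connections rather
        -- than reuse. 'LegacyApp' is a placeholder filter value.
        SELECT s.session_id, s.program_name, s.host_name, c.connect_time, s.login_time
        FROM sys.dm_exec_sessions AS s
        JOIN sys.dm_exec_connections AS c
            ON c.session_id = s.session_id
        WHERE s.program_name LIKE 'LegacyApp%'
        ORDER BY c.connect_time DESC;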

    Read the article

  • Need to reformat SQL Cluster Disk. How do I recover my SQL installation?

    - by I.T. Support
    We need to reformat the SQL cluster disk in our SQL cluster. The drive contains the shared installation files for SQL as well as databases. My concern is how SQL/the cluster will react after we wipe the disk resource.

    Questions:

    Is there a defined procedure for this?

    How should we back up and restore the disk?

    After the reformat, how do we get the clustered SQL Server back online?

    Thanks

    Read the article

  • SQL Server 2005 - query with case statement

    - by user329266
    Trying to put a single query together to be used eventually in a SQL Server 2005 report. I need to:

    Pull in all distinct records for values in the "eventid" column for a time frame - this seems to work.

    For each eventid referenced above, search for all instances of the same eventid to see if there is another record with TaskName like 'review1%'. Again, this seems to work.

    This is where things get complicated: for each record where TaskName is like review1, I need to see if another record exists with the same eventid and where TaskName = 'End'. Ultimately, I need a count of how many records have TaskName like 'review1%', and then how many have TaskName like 'review1%' AND also have a TaskName = 'End' record. I would think this could be accomplished by setting a new value for each record: for the eventid, if a record exists with TaskName = 'End', set it to 1, and if not, set it to 0.

    The query below seems to accomplish item #1 above:

        SELECT eventid, TimeStamp, TaskName, filepath
        FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                     ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
              FROM eventrecords
              WHERE ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000'))) AS T
        WHERE seq = 1
        ORDER BY eventid

    And the query below seems to accomplish #2:

        SELECT eventid, TimeStamp, TaskName, filepath
        FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                     ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
              FROM eventrecords
              WHERE ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000'))
                and TaskName like 'Review1%') AS T
        WHERE seq = 1
        ORDER BY eventid

    This will bring back the eventid's that also have a TaskName = 'End':

        SELECT eventid, TimeStamp, TaskName, filepath
        FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                     ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
              FROM eventrecords
              WHERE ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000'))
                and TaskName like 'Review1%') AS T
        WHERE seq = 1
          and eventid in (Select eventid from eventrecords where TaskName = 'End')
        ORDER BY eventid

    So I've tried the following to TRY to accomplish #3:

        SELECT eventid, TimeStamp, TaskName, filepath
        FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                     ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
              FROM eventrecords
              WHERE ((TimeStamp >= '2010-4-1 00:00:00.000') and (TimeStamp <= '2010-4-21 00:00:00.000'))
                and TaskName like 'Review1%') AS T
        WHERE seq = 1
          and case when (eventid in (Select eventid from eventrecords where TaskName = 'End') then 1 else 0) as bit end
        ORDER BY eventid

    When I try to run this, I get: "Incorrect syntax near the keyword 'then'." Not sure what I'm doing wrong; I haven't seen any examples anywhere quite like this. I should mention that eventrecords has a primary key, but it doesn't seem to help anything when I include it, and I am not permitted to change the table. (ugh) I've received one suggestion to use a cursor and temporary table, but I'm not sure how badly that would bog down performance when the report is running. Thanks in advance.
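
    A hedged observation, not a verified answer: a CASE expression on its own is not a boolean condition, so it cannot sit bare in the WHERE clause, but it can be returned as a 0/1 column and counted afterwards. A sketch of that shape, reusing the poster's inner query:

        -- Hedged sketch: return the 'End' test as a 0/1 column (HasEnd) instead of
        -- putting a CASE expression directly in the WHERE clause.
        SELECT eventid, TimeStamp, TaskName, filepath,
               CASE WHEN EXISTS (SELECT 1 FROM eventrecords e2
                                 WHERE e2.eventid = T.eventid
                                   AND e2.TaskName = 'End')
                    THEN 1 ELSE 0 END AS HasEnd
        FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                     ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
              FROM eventrecords
              WHERE TimeStamp >= '2010-04-01' AND TimeStamp <= '2010-04-21'
                AND TaskName LIKE 'Review1%') AS T
        WHERE seq = 1
        ORDER BY eventid;

    Counting the rows and summing HasEnd would then give the two totals described above.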

    Read the article

  • Date format error in VB.NET?

    - by ahmed
    I get this error when I run the application: "Incorrect syntax near 12". On debugging, I found that this error is caused by the # along with the date.

        Dim backdate As DateTime
        backdate = DateTime.Now.AddDays(-1)

    On binding the data to the grid to filter the backdate records, the error "Incorrect syntax near 12" is raised.

        myqry = " select SRNO,SUBJECT,ID where datesend =" & backdate

    Now I am trying to extract only the date. Or shall I divide the date into day, month and year with DATEPART and take it into a variable, or convert the date, or what should I do? Please help.

    Read the article

  • Convert historic dates from utc to local time

    - by Espo
    I have an SQL table with data like this:

        | theDate (datetime)  | theValue (int) |
        -----------------------------------------
        | 2010-05-17 02:21:10 | 5              |
        | 2009-03-12 04:11:35 | 23             |
        | 2010-02-19 18:16:53 | 52             |
        | 2008-07-07 22:54:11 | 30             |

    The dates are stored in UTC format in a datetime column. How can I convert them to local time (Norway)? Remember that the UTC offset is not the same all year because of winter/summer time.
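
    Not part of the original question, but for reference: on SQL Server 2016 or later this DST-aware conversion can be done directly in T-SQL with AT TIME ZONE (Oslo falls under the Windows zone 'W. Europe Standard Time'); on older versions it usually has to be handled in application code or with a calendar table of offsets. A minimal sketch, with 'theTable' as a placeholder name:

        -- Hedged sketch (requires SQL Server 2016+): interpret the stored value as UTC,
        -- then convert it to Norwegian local time with DST handled automatically.
        SELECT theDate,
               (theDate AT TIME ZONE 'UTC') AT TIME ZONE 'W. Europe Standard Time' AS localDate,
               theValue
        FROM theTable;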

    Read the article

  • SQL Server - Get Inserted Record Identity Value when Using a View's Instead Of Trigger

    - by CuppM
    For several tables that have identity fields, we are implementing a Row Level Security scheme using views and INSTEAD OF triggers on those views. Here is a simplified example structure:

        -- Table
        CREATE TABLE tblItem (
            ItemId int identity(1,1) primary key,
            Name varchar(20)
        )
        go

        -- View
        CREATE VIEW vwItem
        AS
        SELECT *
        FROM tblItem
        -- RLS Filtering Condition
        go

        -- Instead Of Insert Trigger
        CREATE TRIGGER IO_vwItem_Insert
            ON vwItem
            INSTEAD OF INSERT
        AS
        BEGIN
            -- RLS Security Checks on inserted Table

            -- Insert Records Into Table
            INSERT INTO tblItem (Name)
            SELECT Name FROM inserted;
        END
        go

    If I want to insert a record and get its identity, before implementing the RLS INSTEAD OF trigger I used:

        DECLARE @ItemId int;
        INSERT INTO tblItem (Name) VALUES ('MyName');
        SELECT @ItemId = SCOPE_IDENTITY();

    With the trigger, SCOPE_IDENTITY() no longer works - it returns NULL. I've seen suggestions for using the OUTPUT clause to get the identity back, but I can't seem to get it to work the way I need it to. If I put the OUTPUT clause on the view insert, nothing is ever entered into it.

        -- Nothing is added to @ItemIds
        DECLARE @ItemIds TABLE (ItemId int);
        INSERT INTO vwItem (Name)
        OUTPUT INSERTED.ItemId INTO @ItemIds
        VALUES ('MyName');

    If I put the OUTPUT clause in the trigger on the INSERT statement, the trigger returns the table (I can view it from SQL Management Studio). I can't seem to capture it in the calling code, either by using an OUTPUT clause on that call or using a SELECT * FROM ().

        -- Modified Instead Of Insert Trigger w/ Output
        CREATE TRIGGER IO_vwItem_Insert
            ON vwItem
            INSTEAD OF INSERT
        AS
        BEGIN
            -- RLS Security Checks on inserted Table

            -- Insert Records Into Table
            INSERT INTO tblItem (Name)
            OUTPUT INSERTED.ItemId
            SELECT Name FROM inserted;
        END
        go

        -- Calling Code
        INSERT INTO vwItem (Name) VALUES ('MyName');

    The only thing I can think of is to use the IDENT_CURRENT() function. Since that doesn't operate in the current scope, there's an issue of concurrent users inserting at the same time and messing it up. If the entire operation is wrapped in a transaction, would that prevent the concurrency issue?

        BEGIN TRANSACTION
            DECLARE @ItemId int;
            INSERT INTO tblItem (Name) VALUES ('MyName');
            SELECT @ItemId = IDENT_CURRENT('tblItem');
        COMMIT TRANSACTION

    Does anyone have any suggestions on how to do this better? I know people out there who will read this and say "Triggers are EVIL, don't use them!" While I appreciate your convictions, please don't offer that "suggestion".

    Read the article

  • SQL Server Date Comparison Functions

    - by HighAltitudeCoder
    A few months ago, I found myself working with a repetitive cursor that looped until the data had been manipulated enough times that it was finally correct. The cursor was heavily dependent upon dates, every time requiring the earlier of two (or several) dates in one stored procedure, while requiring the later of two dates in another stored procedure.

    In short, what I needed was a function that would allow me to perform the following evaluation:

        WHERE MAX(Date1, Date2) < @SomeDate

    The problem is, the MAX() function in SQL Server does not perform this functionality. So, I set out to put these functions together. They are titled EarlierOf() and LaterOf().

        /**********************************************************
                            EarlierOf.sql
        **********************************************************/
        /**********************************************************
        Return the earlier of two DATETIME variables.

        Parameter 1: DATETIME1
        Parameter 2: DATETIME2

        Works for a variety of DATETIME or NULL values.
        Even though comparisons with NULL are actually indeterminate,
        we know conceptually that NULL is not earlier or later than
        any other date provided.

        SYNTAX:
        SELECT dbo.EarlierOf('1/1/2000','12/1/2009')
        SELECT dbo.EarlierOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
        SELECT dbo.EarlierOf('11/15/2000',NULL)
        SELECT dbo.EarlierOf(NULL,'1/15/2004')
        SELECT dbo.EarlierOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
            (SELECT *
            FROM sysobjects
            WHERE name = 'EarlierOf'
            AND xtype = 'FN'
            )
        BEGIN
            DROP FUNCTION EarlierOf
        END
        GO

        CREATE FUNCTION EarlierOf
        (
            @Date1    DATETIME,
            @Date2    DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
            DECLARE @ReturnDate    DATETIME

            IF (@Date1 IS NULL AND @Date2 IS NULL)
            BEGIN
                SET @ReturnDate = NULL
                GOTO EndOfFunction
            END

            ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
            BEGIN
                SET @ReturnDate = @Date2
                GOTO EndOfFunction
            END

            ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
            BEGIN
                SET @ReturnDate = @Date1
                GOTO EndOfFunction
            END

            ELSE
            BEGIN
                SET @ReturnDate = @Date1
                IF @Date2 < @Date1
                    SET @ReturnDate = @Date2
                GOTO EndOfFunction
            END

            EndOfFunction:
            RETURN @ReturnDate

        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON EarlierOf TO UserRole1
        --GRANT SELECT ON EarlierOf TO UserRole2
        --GO

    The inverse of this function is only slightly different.

        /**********************************************************
                            LaterOf.sql
        **********************************************************/
        /**********************************************************
        Return the later of two DATETIME variables.

        Parameter 1: DATETIME1
        Parameter 2: DATETIME2

        Works for a variety of DATETIME or NULL values.
        Even though comparisons with NULL are actually indeterminate,
        we know conceptually that NULL is not earlier or later than
        any other date provided.

        SYNTAX:
        SELECT dbo.LaterOf('1/1/2000','12/1/2009')
        SELECT dbo.LaterOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
        SELECT dbo.LaterOf('11/15/2000',NULL)
        SELECT dbo.LaterOf(NULL,'1/15/2004')
        SELECT dbo.LaterOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
            (SELECT *
            FROM sysobjects
            WHERE name = 'LaterOf'
            AND xtype = 'FN'
            )
        BEGIN
            DROP FUNCTION LaterOf
        END
        GO

        CREATE FUNCTION LaterOf
        (
            @Date1    DATETIME,
            @Date2    DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
            DECLARE @ReturnDate    DATETIME

            IF (@Date1 IS NULL AND @Date2 IS NULL)
            BEGIN
                SET @ReturnDate = NULL
                GOTO EndOfFunction
            END

            ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
            BEGIN
                SET @ReturnDate = @Date2
                GOTO EndOfFunction
            END

            ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
            BEGIN
                SET @ReturnDate = @Date1
                GOTO EndOfFunction
            END

            ELSE
            BEGIN
                SET @ReturnDate = @Date1
                IF @Date2 > @Date1
                    SET @ReturnDate = @Date2
                GOTO EndOfFunction
            END

            EndOfFunction:
            RETURN @ReturnDate

        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON LaterOf TO UserRole1
        --GRANT SELECT ON LaterOf TO UserRole2
        --GO

    The interesting thing about this function is its simplicity and the built-in NULL handling functionality. It's interesting, because it seems like something should already exist in SQL Server that does this. From a different vantage point, if you create this functionality and it is easy to use (ideally, intuitively self-explanatory), you have made a successful contribution. Interesting is good. Self-explanatory, or intuitive, is FAR better.

    Happy coding!

    Graeme

    Read the article

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
    I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: it seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case.

    We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

        Widgets
        |
        +--- Anvils (1:1)
        |    |
        |    +--- AnvilTestData (1:N)
        |
        +--- WidgetHistory (1:N)
             |
             +--- WidgetHistoryDetails (1:N)

    Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions - production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age.

    So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes.

    Getting to the point here, finally: for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Pretty simple, innocuous looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

        DECLARE @AnvilID int
        SELECT @AnvilID = AnvilID
        FROM Anvils
        WHERE WidgetID = @WidgetID

        DELETE FROM AnvilTestData
        WHERE AnvilID = @AnvilID

        DELETE FROM WidgetHistory
        WHERE HistoryID IN (
            SELECT HistoryID
            FROM WidgetHistory
            WHERE WidgetID = @WidgetID)

        DELETE FROM Widgets
        WHERE WidgetID = @WidgetID

    Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data.

    Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.

    I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets, too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds.

    Now I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are:

    Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.)

    If not, is there another explanation for the behaviour I'm seeing?

    If it is a known issue, why is it an issue, and are there better workarounds I could be using?

    Read the article

  • Get mutually and non-mutually existing fields in the same table in two columns

    - by ranabra
    This is a question similar to another question I posted here, but it is a little different. I am trying to get a list of all instances of mutually and non-mutually existing users. What I mean is that the query will return a list of users along with their co-worker. It is similar to the question here, but the difference is that non-mutual users will be returned too, and without the "duplicity" that mutually existing users produce in the list (see the screenshot mentioned below, which should simplify it all).

    I took the original answer from Thomas (thanks again, Thomas):

        Select D1.u_username, U1.Permission, U1.Grade, D1.f_username, U2.Permission, U2.Grade
        from tblDynamicUserList As D1
            Join tblDynamicUserList As D2
                On D2.u_username = D1.f_username
                    And D2.f_username = D1.u_username
            Join tblUsers As U1
                On U1.u_username = D1.u_username
            Join tblUsers As U2
                On U2.u_username = D2.u_username

    and after several trials I commented out two lines (below). The returned results are exactly as described at the beginning of this question, but with the "duplicity" produced by mutually existing users in the table. How can I eliminate this duplicity?

        Select D1.u_username, U1.Permission, U1.Grade, D1.f_username, U2.Permission, U2.Grade
        from tblDynamicUserList As D1
            Join tblDynamicUserList As D2
                On D2.u_username = D1.f_username
                    /* And D2.f_username = D1.u_username */
            Join tblUsers As U1
                On U1.u_username = D1.u_username
            Join tblUsers As U2
                On U2.u_username = D2.u_username
        /* WHERE D1.U_userName < D1.f_username */

    Screenshot that hopefully helps explain it all (image not included here). The database is SQL Server 2005. Many thanks in advance.
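
    One hedged way to keep both mutual and non-mutual rows while listing each mutual pair only once, sketched against the poster's tables and not offered as a verified answer: keep a row either when no reverse row exists at all, or when a reverse row exists and this row is the alphabetically first of the pair.

        -- Hedged sketch: non-mutual pairs always pass the NOT EXISTS test; mutual pairs
        -- pass only once, via the alphabetical tie-break on the two usernames.
        Select D1.u_username, U1.Permission, U1.Grade, D1.f_username, U2.Permission, U2.Grade
        from tblDynamicUserList As D1
            Join tblUsers As U1 On U1.u_username = D1.u_username
            Join tblUsers As U2 On U2.u_username = D1.f_username
        where not exists (select 1
                          from tblDynamicUserList As D2
                          where D2.u_username = D1.f_username
                            and D2.f_username = D1.u_username)
           or D1.u_username < D1.f_username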

    Read the article

  • SQL Server 2008 takes up a lot of memory?

    - by Ahmed Said
    I am conducting stress tests on my database, which is hosted on 64-bit SQL Server 2008 running on a 64-bit machine with 10 GB of RAM. I have 400 threads, and each thread queries the database every second. The queries themselves are not slow according to SQL Profiler, but after 18 hours SQL Server is using 7.2 GB of RAM and 7.2 GB of virtual memory. Is this normal behavior? How can I adjust SQL Server to clean up unused memory?
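
    For what it is worth (a hedged note, not part of the original question): SQL Server is designed to hold on to the buffer pool it has acquired and gives memory back mainly under OS memory pressure, so the usual control is to cap it with the max server memory setting. A minimal sketch, with the 8 GB cap purely an example value:

        -- Hedged sketch: cap the buffer pool at an example value of 8 GB.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 8192;
        RECONFIGURE;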

    Read the article

  • Are parametrized calls/sanitization/escaping characters necessary for hashed password fields in SQL queries?

    - by Computerish
    When writing a login system for a website, it is standard to use some combination of parameterized calls, sanitizing the user input, and/or escaping special characters to prevent SQL injection attacks. Any good login system, however, should also hash (and possibly salt) every password before it goes into an SQL query, so is it still necessary to worry about SQL injection attacks in passwords? Doesn't a hash completely eliminate any possibility of an SQL injection attack on its own?

    Read the article

  • How to convert a datetime value into a varchar with MM/dd/yyyy HH:MM:SS AM/PM format?

    - by Jyina
    I need to convert the date below into the output shown. I can get the date part using style 101, but for the time I could not find any style that translates the time to HH:MM:SS AM/PM. Any ideas, please? Thank you!

        declare @adddate datetime
        Set @adddate = 2011-07-06T22:30:07.5205649-04:00
        Convert(varchar, @adddate, 101) + ' ' + Convert(varchar, @adddate, 108)

    The output should be 06/07/2011 10:30:07 PM
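
    Two hedged options, neither from the original post, depending on the SQL Server version: FORMAT() is available from SQL Server 2012 onward and expresses the pattern directly, while on older versions a common workaround combines style 101 for the date with style 109 for the 12-hour time and strips the milliseconds. Both are sketches worth verifying against your data.

        -- Hedged sketch, SQL Server 2012+: .NET-style format string.
        SELECT FORMAT(@adddate, 'MM/dd/yyyy hh:mm:ss tt');   -- e.g. 07/06/2011 10:30:07 PM

        -- Hedged sketch, earlier versions: date from style 101, 12-hour time from
        -- style 109 with the ':mmm' milliseconds replaced by a space before AM/PM.
        SELECT CONVERT(varchar(10), @adddate, 101) + ' '
             + STUFF(RIGHT(CONVERT(varchar(26), @adddate, 109), 14), 9, 4, ' ');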

    Read the article

  • Tellago announces SQL Server 2008 R2 BI quick adoption programs

    - by gsusx
    During the last year, we (Tellago) have been involved in various business intelligence initiatives that leverage some emerging BI techniques such as self-service BI or complex event processing (CEP). Specifically, in the last few months, we have partnered with Microsoft to deliver a series of events across the country where we present the different technologies of the SQL Server 2008 R2 BI stack such as PowerPivot, StreamInsight, Ad-Hoc Reporting and Master Data Services. As part of those events...(read more)

    Read the article
