Search Results

Search found 1369 results on 55 pages for 'clause'.

Page 13/55 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • Silverlight RIA Services. Query fails on large table but works with Where clause

    - by FauxReal
    I have a somewhat large table, maybe 2000 rows by 50 columns, and the most basic RIA implementation imaginable: create a one-table model, create a DomainService, drop a datagrid onto MainPage.xaml, drop a datasource onto the datagrid, Ctrl-F5. I get this error: System.ServiceModel.DomainServices.Client.DomainOperationException: Load operation failed for query. Value cannot be null. The full error is much larger, but that's the beginning of it. The weird thing is that if I narrow the results down with a Where clause on the GetQuery, it works fine. In fact, six different queries which together return all of the rows also work fine, so I'm fairly sure it's not some sort of rogue row. Why do I get this "Value cannot be null" error if I query the whole table? Thanks

    Read the article

  • LINQ Where clause and Count result in a null exception

    - by nestling
    The code below works unless p.School.SchoolName turns out to be null, in which case it results in a NullReferenceException. if (ExistingUsers.Where(p => p.StudentID == item.StaffID && p.School.SchoolName == item.SchoolID).Count() > 0) { // Do stuff. } ExistingUsers is a list of users: public List<User> ExistingUsers; Here is the relevant portion of the stacktrace: System.NullReferenceException: Object reference not set to an instance of an object. at System.Linq.Enumerable.WhereListIterator`1.MoveNext() at System.Linq.Enumerable.Count[TSource](IEnumerable`1 source) How should I handle this where clause? Thanks very much in advance.
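
    A null-safe rewrite is straightforward; here is a minimal sketch reusing the names from the question:

        using System.Linq;

        // Guard against a null School before touching SchoolName, and use Any()
        // instead of Where(...).Count() > 0 so enumeration stops at the first match.
        if (ExistingUsers.Any(p => p.StudentID == item.StaffID
                                && p.School != null
                                && p.School.SchoolName == item.SchoolID))
        {
            // Do stuff.
        }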

    Read the article

  • Which SQL query is faster? Filter on Join criteria or Where clause?

    - by Jon Erickson
    Compare these two queries: is it faster to put the filter on the join criteria or in the WHERE clause? I have always felt that it is faster on the join criteria because it reduces the result set at the soonest possible moment, but I don't know for sure. I'm going to build some tests to see, but I also wanted to get opinions on which is clearer to read as well. Query 1 SELECT * FROM TableA a INNER JOIN TableXRef x ON a.ID = x.TableAID INNER JOIN TableB b ON x.TableBID = b.ID WHERE a.ID = 1 /* <-- Filter here? */ Query 2 SELECT * FROM TableA a INNER JOIN TableXRef x ON a.ID = x.TableAID AND a.ID = 1 /* <-- Or filter here? */ INNER JOIN TableB b ON x.TableBID = b.ID
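
    One point worth making explicit: for INNER joins the two placements are logically equivalent, and optimizers normally produce the same plan for both, so the difference comes down to readability. Placement only changes the result for OUTER joins. A sketch against the tables above (Flag is a hypothetical column added for illustration):

        -- With a LEFT JOIN, a filter in ON keeps unmatched TableA rows
        -- (their x columns come back NULL), while the same filter in
        -- WHERE removes those rows entirely.
        SELECT *
        FROM TableA a
        LEFT JOIN TableXRef x
            ON a.ID = x.TableAID
            AND x.Flag = 1;   -- hypothetical column, for illustration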

    Read the article

  • How would I order a table by the number of matching params in the where clause of a SQL statement?

    - by Eitan
    I'm writing SQL to search a database by a number of parameters. How would I go about ordering the result set by the items that match the most parameters in the where clause? For example: SELECT * FROM users WHERE username = 'eitan' OR email = '[email protected]' OR company = 'eitan'
       Username | email             | company
    1) eitan    | [email protected] | blah
    2) eitan    | [email protected] | eitan
    3) eitan    | [email protected] | blah
    should be ordered like: 2, 3, 1. Thanks. (P.S. the real query isn't that simple; it has a lot of joins and a lot of ORs in the WHERE.) Eitan
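
    One way to do this is to score each predicate with a CASE expression and sort on the total; a sketch against the example table (the email literal is a hypothetical stand-in for the redacted address):

        SELECT u.*,
               (CASE WHEN u.username = 'eitan'             THEN 1 ELSE 0 END
              + CASE WHEN u.email    = 'eitan@example.com' THEN 1 ELSE 0 END
              + CASE WHEN u.company  = 'eitan'             THEN 1 ELSE 0 END) AS matches
        FROM users u
        WHERE u.username = 'eitan'
           OR u.email = 'eitan@example.com'
           OR u.company = 'eitan'
        ORDER BY matches DESC;

    Ordering by the column alias works in MySQL and PostgreSQL; some engines require repeating the expression in the ORDER BY.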

    Read the article

  • Why do I need to explicitly specify all columns in a SQL "GROUP BY" clause - why not "GROUP BY *"?

    - by rwmnau
    This has always bothered me - why does the GROUP BY clause in a SQL statement require that I include all non-aggregate columns? These columns should be included by default - a kind of "GROUP BY *" - since I can't even run the query unless they're all included. Every column has to either be an aggregate or be specified in the "GROUP BY", but it seems like anything not aggregated should be automatically grouped. Maybe it's part of the ANSI-SQL standard, but even so, I don't understand why. Can somebody help me understand the need for this convention?
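
    One reason "GROUP BY *" would be ambiguous: the GROUP BY list is not forced to mirror the select list. You may group by a column you never select, and select an expression over a grouping column, so the set of grouping columns cannot always be inferred from the query. A sketch with hypothetical names:

        -- Legal: region is grouped on but never selected, and the select
        -- list contains an expression over the grouping column city.
        SELECT UPPER(city) AS city_upper, COUNT(*) AS cnt
        FROM customers
        GROUP BY city, region;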

    Read the article

  • How to write a simple Where clause for dynamic filtering in LINQ when using groups in a join

    - by malik
    I have a simple search page and I want to filter the results. var TransactionStats = from trans in context.ProductTransactionSet.Include("SPL") select new { trans.InvoiceNo, ProductGroup = from tranline in trans.ProductTransactionLines group tranline by tranline.ProductTransaction.TransactionID into ProductGroupDetil select new { TransactionDateTime = ProductGroupDetil.Select (Content => Content.TransactionDateTime) } }; I want to use TransactionDateTime in the where clause when required. if (_FilterCrieteria.DateFrom.HasValue) { TransactionStats.Where ( a => a.ProductGroup.Where ( dt => dt.DateofTransaction >= _FilterCrieteria.DateFrom && dt.DateofTransaction >= _FilterCrieteria.DateFrom ) ) } Can anyone correct the syntax?
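
    Two things are wrong with the filter as written: Where() returns a new query that must be reassigned, and the inner collection test has to be Any() (which yields a bool), not Where(). A sketch, assuming TransactionDateTime is projected as a single DateTime (say via .Min()) rather than the sequence the original Select produces:

        if (_FilterCrieteria.DateFrom.HasValue)
        {
            // Reassign: LINQ queries are immutable, Where() does not filter in place.
            TransactionStats = TransactionStats.Where(
                t => t.ProductGroup.Any(
                    g => g.TransactionDateTime >= _FilterCrieteria.DateFrom.Value));
        }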

    Read the article

  • Inspiration and influence of the else clause of loop statements?

    - by Aristide
    Python offers an optional loop-else clause which is executed if and only if the loop is not terminated by a break. (In other words, the condition fails for a while-loop or the iterator is exhausted for a for-loop.) Does this loop-else construct originate from another language? (Either theoretical or actually implemented.) Has it been taken up in any newer language? Maybe I should ask the former of Guido, but surely he is too busy for such a futile inquiry. ;-) Related discussion and examples: Pythonic ways to use ‘else’ in a for loop

    Read the article

  • Using the Where method in LINQ to Entities with an OR clause

    - by Dani
    I want to use the Where method in LINQ to Entities to do the equivalent of this: userRepository.Users.Where(u=>u.RoleID == 1 || u=>u.RoleID == 2).Select(o => new SelectListItem { Text = o.Role.RoleName, Value = o.RoleID.ToString() }).ToList(); The problem, of course, is in Where(u=>u.RoleID == 1 || u=>u.RoleID == 2): I don't know how to use the Where method with an OR inside the where clause. Any ideas? (The code above will not compile, of course, because of the lambda expression.) userRepository.Users returns a list of User entities. I guess an AND can be done by chaining .Where().Where(), but I need an OR.
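
    The OR belongs inside a single lambda; each Where call takes one predicate. A minimal sketch using the names from the question:

        var roles = userRepository.Users
            .Where(u => u.RoleID == 1 || u.RoleID == 2)
            .Select(u => new SelectListItem
            {
                Text = u.Role.RoleName,
                Value = u.RoleID.ToString()
            })
            .ToList();

    Chaining .Where().Where() does indeed give you an AND; any boolean expression, OR included, can live inside one predicate.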

    Read the article

  • Query Execution Plan - When is the Where clause executed?

    - by Alex
    I have a query like this (created by LINQ): SELECT [t0].[Id], [t0].[CreationDate], [t0].[CreatorId] FROM [dbo].[DataFTS]('test', 100) AS [t0] WHERE [t0].[CreatorId] = 1 ORDER BY [t0].[RANK] DataFTS is a full-text search table-valued function. The query execution plan looks like this:
        SELECT (0%)
          - Sort (23%)
            - Nested Loops (Inner Join) (1%)
              - Sort (Top N Sort) (25%)
                - Stream Aggregate (0%)
                  - Stream Aggregate (0%)
                    - Compute Scalar (0%)
                      - Table Valued Function (FullTextMatch) (13%)
              - Clustered Index Seek (38%)
    Does this mean that the WHERE clause ([CreatorId] = 1) is executed prior to the TVF (the full-text search) or after the full-text search? Thank you.

    Read the article

  • Unknown column 'templateName' in where clause while using a stored procedure

    - by sai
    Hello, I have a stored procedure which was created without any errors, but it gives me the error "#1054: Unknown column 'templateName' in where clause" when I run it. The stored procedure is delimiter // DROP PROCEDURE getData// CREATE DEFINER=root@localhost PROCEDURE getData(IN templateName VARCHAR(45),IN templateVersion VARCHAR(45),IN userId VARCHAR(45)) BEGIN set @version = CONCAT("SELECT 'saveOEMsData_answersVersion' FROM saveOEMsData where 'saveOEMsData_templateName' = ",templateName," and 'saveOEMsData_templateVersion' = ",templateVersion," and 'saveOEMsData_userId'= ",userId); PREPARE s1 from @version; EXECUTE S1; END // delimiter ; Now I call it using "call getData('templateName','1','285');", and whenever I call it, I get the mentioned error. What could the problem be? It is surely syntactical; I have been reading the MySQL manuals for two days and have come up with nothing. Any help would be great! Thanks
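
    A likely culprit: the single quotes turn the column names into string literals, and CONCAT splices the parameter values in unquoted, so MySQL parses a value like templateName as an identifier. Since nothing here actually needs dynamic SQL, a sketch of a plain (and injection-safe) body, declared inside the same delimiter block as before:

        CREATE DEFINER=`root`@`localhost` PROCEDURE getData(
            IN p_templateName VARCHAR(45),
            IN p_templateVersion VARCHAR(45),
            IN p_userId VARCHAR(45))
        BEGIN
            -- Identifiers take backticks (or no quoting), not single quotes;
            -- the p_ prefix keeps the parameters from shadowing column names.
            SELECT saveOEMsData_answersVersion
            FROM saveOEMsData
            WHERE saveOEMsData_templateName = p_templateName
              AND saveOEMsData_templateVersion = p_templateVersion
              AND saveOEMsData_userId = p_userId;
        END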

    Read the article

  • Linq to SQL - How to compare against a collection in the where clause?

    - by Sgraffite
    I'd like to compare against an IEnumerable collection in my where clause. Do I need to manually loop through the collection to pull out the column I want to compare against, or is there a generic way to handle this? I want something like this: public IEnumerable<Cookie> GetCookiesForUsers(IEnumerable<User> Users) { var cookies = from c in db.Cookies join uc in db.UserCookies on c.CookieID equals uc.CookieID join u in db.Users on uc.UserID equals u.UserID where u.UserID.Equals(Users.UserID) select c; return cookies.ToList(); } I'm used to using the lambda Linq to SQL syntax, but I decided to try the SQLesque syntax since I was using joins this time. What is a good way to do this?
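
    There is no need to loop manually: Contains() on a plain list translates to an IN (...) clause in LINQ to SQL. A sketch reusing the method from the question:

        public IEnumerable<Cookie> GetCookiesForUsers(IEnumerable<User> users)
        {
            // Materialize the IDs first so the provider sees a simple value list.
            var userIds = users.Select(u => u.UserID).ToList();

            var cookies = from c in db.Cookies
                          join uc in db.UserCookies on c.CookieID equals uc.CookieID
                          join u in db.Users on uc.UserID equals u.UserID
                          where userIds.Contains(u.UserID)
                          select c;

            return cookies.ToList();
        }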

    Read the article

  • Can't use MySQL extract() function in the WHERE clause.

    - by UkraineTrain
    I've run the following query: UPDATE main_table, reference_table SET main_table.calc_column = (CASE WHEN main_table.incr = "6AM" THEN reference_table.col1+reference_table.col2+... WHEN main_table.incr = "12AM" THEN reference_table.col7+reference_table.col8+... WHEN main_table.incr = "6PM" THEN reference_table.col13+reference_table.col14+... ELSE reference_table.col19+reference_table.col20+...) WHERE main_table.month = extract(month from reference_table.thedate) AND main_table.day = extract(day from reference_table.thedate) I've used the extract() function since my reference_table doesn't have month and day columns, but it has a date column named thedate. I've used the extract() function on the reference_table many times before successfully, so I know that there's nothing wrong with my extract syntax. However, in this instance, MySQL complains, and it probably has to do with how I've used it in the WHERE clause. I know that this issue could be fixed by adding month and day columns to the reference_table to avoid using the extract() function. However, I'm very reluctant to do that and would like to avoid it. How can I make it work?
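
    Before blaming extract(), note that the CASE expression above never closes: CASE ... WHEN ... ELSE ... must finish with END, and MySQL's error position can point somewhere misleading. A trimmed sketch of the corrected statement (column lists shortened; the ellipses from the question are left out):

        UPDATE main_table, reference_table
        SET main_table.calc_column =
            (CASE WHEN main_table.incr = '6AM'  THEN reference_table.col1 + reference_table.col2
                  WHEN main_table.incr = '12AM' THEN reference_table.col7 + reference_table.col8
                  WHEN main_table.incr = '6PM'  THEN reference_table.col13 + reference_table.col14
                  ELSE reference_table.col19 + reference_table.col20
             END)
        WHERE main_table.month = EXTRACT(MONTH FROM reference_table.thedate)
          AND main_table.day   = EXTRACT(DAY   FROM reference_table.thedate);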

    Read the article

  • Is an index required for columns in ON clause?

    - by newbie
    Do I have to create an index on columns referenced in joins? E.g. SELECT * FROM left_table INNER JOIN right_table ON left_table.foo = right_table.bar WHERE ... Should I create indexes on left_table(foo), right_table(bar), or both? I noticed different results when I used EXPLAIN (PostgreSQL) with and without indexes, and when switching around the order of the comparison (right_table.bar = left_table.foo). I know for sure that indexes are used for columns referenced in the WHERE clause, but I am wondering whether I need indexes for columns listed in ON clauses.
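
    In general, indexing both sides gives the planner the most freedom, since either table can end up as the driven side of the join; whether a given index is actually used depends on row counts and the join strategy chosen (nested loop vs. hash vs. merge). A sketch:

        CREATE INDEX idx_left_foo  ON left_table (foo);
        CREATE INDEX idx_right_bar ON right_table (bar);

    Writing the equality as a.foo = b.bar versus b.bar = a.foo should not change the semantics; plan differences come from the optimizer's choices, not from the comparison order itself.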

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • Why does DbCommandBuilder (Oracle) produce a weird WHERE clause for the UpdateCommand in C# / ADO.NET 2.0?

    - by matti
    I have a table HolidayHome in an Oracle DB which has a unique index on Id (I haven't specified this in the code in any way for the adapter/table/dataset; I don't know if I should or can). DbDataAdapter.SelectCommand is like this: SELECT Id, ExtId, Label, Location1, Location2, Location3, Location4, ClassId, X, Y, UseType FROM HolidayHome but the UpdateCommand generated by DbCommandBuilder has a very weird WHERE clause: UPDATE HOLIDAYHOME SET ID = :p1, EXTID = :p2, LABEL = :p3, LOCATION1 = :p4, LOCATION2 = :p5, LOCATION3 = :p6, LOCATION4 = :p7, CLASSID = :p8, X = :p9, Y = :p10, USETYPE = :p11 WHERE ((ID = :p12) AND ((:p13 = 1 AND EXTID IS NULL) OR (EXTID = :p14)) AND ((:p15 = 1 AND LABEL IS NULL) OR (LABEL = :p16)) AND ((:p17 = 1 AND LOCATION1 IS NULL) OR (LOCATION1 = :p18)) AND ((:p19 = 1 AND LOCATION2 IS NULL) OR (LOCATION2 = :p20)) AND ((:p21 = 1 AND LOCATION3 IS NULL) OR (LOCATION3 = :p22)) AND ((:p23 = 1 AND LOCATION4 IS NULL) OR (LOCATION4 = :p24)) AND (CLASSID = :p25) AND (X = :p26) AND (Y = :p27) AND (USETYPE = :p28)) The code is like this: static void CreateInsertUpdateDeleteCmds(DbDataAdapter dataAdapter) { DbCommandBuilder builder = _trgtProvFactory.CreateCommandBuilder(); builder.DataAdapter = dataAdapter; // Get the insert, update and delete commands. dataAdapter.InsertCommand = builder.GetInsertCommand(); dataAdapter.UpdateCommand = builder.GetUpdateCommand(); dataAdapter.DeleteCommand = builder.GetDeleteCommand(); } What to do? The UpdateCommand is utter madness. Thanks & Best Regards: Matti
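
    That WHERE clause is not madness, it is optimistic concurrency: by default the builder compares every column's original value (the :pNN = 1 parameters flag which originals were NULL), so the UPDATE silently affects zero rows if anyone changed the row since it was read. If last-writer-wins is acceptable, setting ConflictOption restricts the WHERE clause to the key; a sketch of the method above:

        static void CreateInsertUpdateDeleteCmds(DbDataAdapter dataAdapter)
        {
            DbCommandBuilder builder = _trgtProvFactory.CreateCommandBuilder();
            builder.DataAdapter = dataAdapter;

            // Compare only the key in WHERE; this requires the key to be
            // discoverable from the SELECT's schema (your unique index on Id).
            builder.ConflictOption = ConflictOption.OverwriteChanges;

            dataAdapter.InsertCommand = builder.GetInsertCommand();
            dataAdapter.UpdateCommand = builder.GetUpdateCommand();
            dataAdapter.DeleteCommand = builder.GetDeleteCommand();
        }

    ConflictOption lives in System.Data and has been on DbCommandBuilder since .NET 2.0.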

    Read the article

  • Why does DbCommandBuilder (Oracle) produce a weird WHERE clause for the UpdateCommand?

    - by matti
    I have a table HolidayHome in an Oracle DB which has a unique index on Id (I haven't specified this in the code in any way for the adapter/table/dataset; I don't know if I should or can). DbDataAdapter.SelectCommand is like this: SELECT Id, ExtId, Label, Location1, Location2, Location3, Location4, ClassId, X, Y, UseType FROM HolidayHome but the UpdateCommand generated by DbCommandBuilder has a very weird WHERE clause: UPDATE HOLIDAYHOME SET ID = :p1, EXTID = :p2, LABEL = :p3, LOCATION1 = :p4, LOCATION2 = :p5, LOCATION3 = :p6, LOCATION4 = :p7, CLASSID = :p8, X = :p9, Y = :p10, USETYPE = :p11 WHERE ((ID = :p12) AND ((:p13 = 1 AND EXTID IS NULL) OR (EXTID = :p14)) AND ((:p15 = 1 AND LABEL IS NULL) OR (LABEL = :p16)) AND ((:p17 = 1 AND LOCATION1 IS NULL) OR (LOCATION1 = :p18)) AND ((:p19 = 1 AND LOCATION2 IS NULL) OR (LOCATION2 = :p20)) AND ((:p21 = 1 AND LOCATION3 IS NULL) OR (LOCATION3 = :p22)) AND ((:p23 = 1 AND LOCATION4 IS NULL) OR (LOCATION4 = :p24)) AND (CLASSID = :p25) AND (X = :p26) AND (Y = :p27) AND (USETYPE = :p28)) All the fields that appear like this: ((:p17 = 1 AND LOCATION1 IS NULL) OR (LOCATION1 = :p18)) are defined in the Oracle DB like this: LOCATION1 VARCHAR2(30) so they allow null values, which is why the builder adds the IS NULL branches. The code looks like this: static void CreateInsertUpdateDeleteCmds(DbDataAdapter dataAdapter) { DbCommandBuilder builder = _trgtProvFactory.CreateCommandBuilder(); builder.DataAdapter = dataAdapter; // Get the insert, update and delete commands. dataAdapter.InsertCommand = builder.GetInsertCommand(); dataAdapter.UpdateCommand = builder.GetUpdateCommand(); dataAdapter.DeleteCommand = builder.GetDeleteCommand(); } What to do? The UpdateCommand is utter madness. Thanks & Best Regards: Matti

    Read the article

  • How to avoid overlapping date ranges when using a grouping clause?

    - by k rey
    I have a situation where I need to find the time spans between value changes. I tried a simple GROUP BY clause, but it collapses the separate runs and produces overlapping ranges. Consider the following example: create table #items ( code varchar(4) , class varchar(4) , txdate datetime ) insert into #items (code, class, txdate) values ('A', 'C', '2010-01-01'); insert into #items (code, class, txdate) values ('A', 'C', '2010-01-02'); insert into #items (code, class, txdate) values ('A', 'C', '2010-01-03'); insert into #items (code, class, txdate) values ('A', 'D', '2010-01-04'); insert into #items (code, class, txdate) values ('A', 'D', '2010-01-05'); insert into #items (code, class, txdate) values ('A', 'C', '2010-01-06'); insert into #items (code, class, txdate) values ('A', 'C', '2010-01-07'); insert into #items (code, class, txdate) values ('A', 'D', '2010-01-08'); insert into #items (code, class, txdate) values ('A', 'D', '2010-01-09'); select code , class , min(txdate) mindate , max(txdate) maxdate from #items group by code, class This returns the following results (notice the overlapping date ranges):
        |code|class|mindate   |maxdate   |
        |A   |C    |2010-01-01|2010-01-07|
        |A   |D    |2010-01-04|2010-01-09|
    I would like the query to return the following:
        |code|class|mindate   |maxdate   |
        |A   |C    |2010-01-01|2010-01-03|
        |A   |D    |2010-01-04|2010-01-05|
        |A   |C    |2010-01-06|2010-01-07|
        |A   |D    |2010-01-08|2010-01-09|
    Any ideas and suggestions?
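
    This is the classic gaps-and-islands problem. One standard solution (SQL Server 2005+ syntax, matching the #items temp table above): the difference between a row number taken per code and one taken per code and class is constant within each unbroken run, so grouping on that difference splits the runs:

        SELECT code, class,
               MIN(txdate) AS mindate,
               MAX(txdate) AS maxdate
        FROM (
            SELECT code, class, txdate,
                   ROW_NUMBER() OVER (PARTITION BY code ORDER BY txdate)
                 - ROW_NUMBER() OVER (PARTITION BY code, class ORDER BY txdate) AS grp
            FROM #items
        ) AS runs
        GROUP BY code, class, grp
        ORDER BY mindate;

    Against the sample data this returns exactly the four desired rows.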

    Read the article

  • How to add second JOIN clause in Linq To Sql?

    - by Refracted Paladin
    I am having a lot of trouble coming up with the LINQ equivalent of this legacy stored procedure. The biggest hurdle is that it doesn't seem to let me add a second 'clause' on the join with tblAddress; I am getting a Cannot resolve method... error. Can anyone point out what I am doing wrong? Below is, first, the SPROC I need to convert and, second, my LINQ attempt so far, which is FULL OF FAIL! Thanks SELECT dbo.tblPersonInsuranceCoverage.PersonInsuranceCoverageID, dbo.tblPersonInsuranceCoverage.EffectiveDate, dbo.tblPersonInsuranceCoverage.ExpirationDate, dbo.tblPersonInsuranceCoverage.Priority, dbo.tblAdminInsuranceCompanyType.TypeName AS CoverageCategory, dbo.tblBusiness.BusinessName, dbo.tblAdminInsuranceType.TypeName AS TypeName, CASE WHEN dbo.tblAddress.AddressLine1 IS NULL THEN '' ELSE dbo.tblAddress.AddressLine1 END + ' ' + CASE WHEN dbo.tblAddress.CityName IS NULL THEN '' ELSE '<BR>' + dbo.tblAddress.CityName END + ' ' + CASE WHEN dbo.tblAddress.StateID IS NULL THEN '' WHEN dbo.tblAddress.StateID = 'ns' THEN '' ELSE dbo.tblAddress.StateID END AS Address FROM dbo.tblPersonInsuranceCoverage LEFT OUTER JOIN dbo.tblInsuranceCompany ON dbo.tblPersonInsuranceCoverage.InsuranceCompanyID = dbo.tblInsuranceCompany.InsuranceCompanyID LEFT OUTER JOIN dbo.tblBusiness ON dbo.tblBusiness.BusinessID = dbo.tblInsuranceCompany.BusinessID LEFT OUTER JOIN dbo.tblAddress ON dbo.tblAddress.BusinessID = dbo.tblBusiness.BusinessID and tblAddress.AddressTypeID = 'b' LEFT OUTER JOIN dbo.tblAdminInsuranceCompanyType ON dbo.tblPersonInsuranceCoverage.InsuranceCompanyTypeID = dbo.tblAdminInsuranceCompanyType.InsuranceCompanyTypeID LEFT OUTER JOIN dbo.tblAdminInsuranceType ON dbo.tblPersonInsuranceCoverage.InsuranceTypeID = dbo.tblAdminInsuranceType.InsuranceTypeID WHERE tblPersonInsuranceCoverage.PersonID = @PersonID var coverage = from insuranceCoverage in context.tblPersonInsuranceCoverages where insuranceCoverage.PersonID == personID join insuranceCompany in context.tblInsuranceCompanies on insuranceCoverage.InsuranceCompanyID equals insuranceCompany.InsuranceCompanyID join address in context.tblAddresses on insuranceCompany.tblBusiness.BusinessID equals address.BusinessID where address.AddressTypeID == "b" select new { insuranceCoverage.PersonInsuranceCoverageID, insuranceCoverage.EffectiveDate, insuranceCoverage.ExpirationDate, insuranceCoverage.Priority, CoverageCategory = insuranceCompany.tblAdminInsuranceCompanyType.TypeName, insuranceCompany.tblBusiness.BusinessName, TypeName = insuranceCoverage.InsuranceTypeID, Address = };
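
    LINQ query joins are equijoins only, so the extra tblAddress.AddressTypeID = 'b' condition cannot be written as a second on clause; it can, however, be folded into a composite anonymous-type key (property names and types on both sides must match). A sketch of just the address join, assuming AddressTypeID maps to a string:

        join address in context.tblAddresses
            on new { insuranceCompany.tblBusiness.BusinessID, TypeID = "b" }
            equals new { address.BusinessID, TypeID = address.AddressTypeID }

    Note this reproduces an inner join; to keep the LEFT OUTER JOIN semantics of the original SPROC, follow the join with into tmp and then from address in tmp.DefaultIfEmpty().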

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I’m working on optimizing some of my queries and I have a query that states: select * from SC where c_id = " + c_id The schema of SC looks like this: SC ( c_id int not null, date_start date not null, date_stop date not null, r_t_id int not null, nt int, t_p decimal, PRIMARY KEY (c_id, r_t_id, date_start, date_stop)); My first thought on how the index should be created is a covering index in this order: INDEX(c_id, date_start, date_stop, nt, r_t_id, t_p) I base this order on the following:
    - The WHERE clause selects on c_id, making it the leading column.
    - Next, date_start and date_stop, which define a sort of "range".
    - Next, nt, because the query selects it.
    - Next, r_t_id, because it is the ID for a specific type in my r_t table.
    - And last, t_p, because it is just information.
    I don't know if it is at all necessary to order the columns in a specific way when the statement is a SELECT *. I should say that SC is not the biggest table. I can't say exactly how many rows it contains, but an estimate would be anywhere from fewer than 10 up to 1000. The next thing to add is that other queries insert data into SC, and I know that indexes on tables with frequent insertions can be cost-ineffective, but is there a middle way that keeps this performing well? Don't know if it makes a difference, but I'm using an IBM DB2 version 9.7 database. Sincerely Mestika
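
    Worth noting before adding anything: the existing primary key already leads on c_id, so the equality predicate is served by it, and for a SELECT * query a fully covering secondary index would essentially duplicate this small table. If a covering index were still wanted, the usual rule is equality columns first, then range columns, then columns carried only to cover the select list:

        -- A sketch; on a table this small, DB2 may reasonably
        -- choose a scan regardless of any index.
        CREATE INDEX idx_sc_cover
            ON SC (c_id, date_start, date_stop, r_t_id, nt, t_p);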

    Read the article

  • Make mysqldump output USE statements or full table names when dumping a single table with a where clause

    - by tobyodavies
    Is it possible to get mysqldump to output USE statements for a single (partial) table dump? I've already got some scripts that I'd like to reuse which run mysqldump with some arguments and apply the result to a remote server. However, since I haven't bothered to parse all the arguments to mysqldump, and there is no USE in the dump, the remote server says 'no database selected'. I'm a programmer more than anything else, so in the worst case I can easily use sed to modify the dump before applying it, but those scripts won't allow me to do this, as I don't have access to the dump between creation and application. EDIT: the ability to output fully qualified table names may also solve my problem.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #049

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across the years. Instead of just listing all the articles, I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2007 Two Connections Related Global Variables Explained – @@CONNECTIONS and @@MAX_CONNECTIONS @@CONNECTIONS returns the number of attempted connections, either successful or unsuccessful, since SQL Server was last started. @@MAX_CONNECTIONS returns the maximum number of simultaneous user connections allowed on an instance of SQL Server. The number returned is not necessarily the number currently configured. Query Editor – Microsoft SQL Server Management Studio This post may be very simple for most users of SQL Server 2005. Earlier this year, I received one question many times – where is Query Analyzer in SQL Server 2005? I wrote a small post about it and pointed many users to it – SQL SERVER – 2005 Query Analyzer – Microsoft SQL SERVER Management Studio. Recently I have been receiving a similar question. OUTPUT Clause Example and Explanation with INSERT, UPDATE, DELETE SQL Server 2005 has a new OUTPUT clause, which is quite useful. The OUTPUT clause has access to the inserted and deleted tables (virtual tables), just like triggers. The OUTPUT clause can be used to return values to the client. The OUTPUT clause can be used with INSERT, UPDATE, or DELETE to identify the actual rows affected by these statements. The OUTPUT clause can populate a table variable, a permanent table, or a temporary table. Even though @@IDENTITY still works with SQL Server 2005, I find the OUTPUT clause very easy and powerful to use. Let us understand the OUTPUT clause using an example. Find Name of The SQL Server Instance Depending on the database server, stored procedures have to run different logic. We came up with two different solutions. 1) When the database schema changed significantly, we wrote a completely new stored procedure and deprecated the older version once it was no longer needed. 2) When logic depended on the server name, we used the global variable @@SERVERNAME. It was very convenient while writing migration scripts which depended on the server name for the same database. Explanation of TRY…CATCH and ERROR Handling With RAISERROR Function One of the developers at my company thought that we could not use the RAISERROR function with the new SQL Server 2005 TRY…CATCH feature. When asked for an explanation, he pointed to the SQL SERVER – 2005 Explanation of TRY… CATCH and ERROR Handling article as evidence, suggesting that I did not give an example of RAISERROR with TRY…CATCH. We all thought it was funny. Just to keep the record straight, TRY…CATCH can certainly use the RAISERROR function. Different Types of Cache Objects Several kinds of objects can be stored in the procedure cache: Compiled plans: when the query optimizer finishes compiling a query plan, the principal output is the compiled plan. Execution contexts: while executing a compiled plan, SQL Server has to keep track of information about the state of execution. Cursors: cursors track the execution state of server-side cursors, including the cursor's current location within a resultset. Algebrizer trees: the Algebrizer's job is to produce an algebrizer tree, which represents the logical structure of a query. Open SSMS From Command Prompt – sqlwb.exe Example This article was written by request and suggestion of a Sr. Web Developer at my organization.
    Due to the nature of this article, most of the content is referenced from Books Online. sqlwb is a command prompt utility which opens SQL Server Management Studio; the sqlwb command does not run queries from the command prompt. The sqlcmd utility runs queries from the command prompt; read the linked article for more information. 2008 Puzzle – Solution – Computed Columns Datatype Explanation Just a day before, I wrote the article SQL SERVER – Puzzle – Computed Columns Datatype Explanation, which was inspired by SQL Server MVP Jacob Sebastian. I suggest that before continuing this article you read the original puzzle question SQL SERVER – Puzzle – Computed Columns Datatype Explanation. The question was: if the computed column is of datatype TINYINT, how do you create a computed column of datatype INT? 2008 – Find If Index is Being Used in Database Very often I get the question of how to find out whether an index is being used in the database or not. If a database has many indexes and not all of them are used, it can adversely affect performance. A higher number of indexes slows down INSERT/UPDATE/DELETE operations but can speed up SELECT operations. It is recommended to drop any unused indexes from a table to improve performance. 2009 Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries If you want to see what's going on here, I think you need to shift your point of view from an implementation-centric view to an ANSI point of view. ANSI does not guarantee processing order. Figure 2 is interesting, but it will be potentially misleading if you don't understand the ANSI rule-set SQL Server operates under in most cases. Implementation thinking can certainly be useful at times when you really need that multi-million row query to finish before the backup fires off, but in this case, it's counterproductive to understanding what is going on. SQL Server Management Studio and Client Statistics Client Statistics are very important. Many times, people equate a query's execution plan cost with its real cost. This is not a good comparison: the two parameters are different, and they are not always related. It is possible for a statement's query cost to be low while the amount of data returned is considerably larger, which causes the query to run slowly. How do we know if a query is retrieving a large amount of data or very little data? 2010 I encourage all of you to go through the complete series and write your own articles on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post to point to your blog post.
SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1 SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2 SQL SERVER – Index Created on View not Used Often – Limitation of the View 3 SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4 SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5 SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6 SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7 SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8 SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9 SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10 SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11 SQL SERVER – Get Query Running in Session I was recently looking for the syntax to get the query running in a particular session. I always remembered the syntax and had actually written it down before, but somehow it was not coming to mind quickly this time. I searched online and ended up on my own article written last year, SQL SERVER – Get Last Running Query Based on SPID. I felt like I was getting old because I had forgotten this really simple syntax. Find Total Number of Transaction on Interval In one of my recent performance tuning assignments I was asked how someone can know how many transactions are happening on a server during a certain interval. I had a handy script for exactly that. The following script displays the transactions that happened on the server at an interval of one minute. You can change the WAITFOR DELAY to any other interval and it should still work. 2011 Here are two DMVs which are newly introduced in SQL Server 2012 and provide vital information about SQL Server. DMV – sys.dm_os_volume_stats – Information about operating system volumes DMV – sys.dm_os_windows_info – Information about the operating system SQL Backup and FTP – A Quick and Handy Tool I have used this tool extensively since 2009 on numerous occasions and found it to be very impressive. What separates it from the crowd the most is its apparent simplicity and speed. When I install SQLBackupAndFTP and configure backups, all in 1 or 2 minutes, my clients are always impressed. Quick Note about JOIN – Common Questions and Simple Answers In this blog post we are going to talk about JOIN and lots of things related to it. I recently started office hours to answer questions and issues from the community, and I receive many questions related to JOIN. I will share a few of them here. Most of them are basic, but note that the basics are of great importance. 2012 Importance of User Without Login Question: "In recent versions of SQL Server we can create a user without a login. What is the use of it?" Great question indeed. Let me first attempt to answer it, but after reading my answer I need your help: I want you to help him as well by adding more value to it. Preserve Leading Zero While Copying to Excel from SSMS Earlier I wrote two articles about how to efficiently copy data from SSMS to Excel. Since I wrote those posts, plenty of interest has been generated on this subject, and there are a few questions I keep on getting about it. One of the questions is how to keep leading zeros preserved while copying data from SSMS to Excel.
Well, it is handled almost the same way as in my earlier post SQL SERVER – Excel Losing Decimal Values When Value Pasted from SSMS ResultSet. The key here is in Excel and not in SQL Server. Solution – 2 T-SQL Puzzles – Display Star and Shortest Code to Display 1 Earlier on this blog we had asked two puzzles. The response from all of you is nothing but amazing. I have received 350+ responses. Many are valid and many were indeed something I had not thought about. I strongly suggest you read all the puzzles and their answers here; trust me, if you start reading the comments you will not stop till you have read every single one. Seriously, trust me on it. Personally I have learned a lot from them. Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028 – Video http://www.youtube.com/watch?v=TvlYy-TGaaA Importance of User Without Login – T-SQL Demo Script Earlier I wrote a blog post about SQL SERVER – Importance of User Without Login, and my friend and SQL expert Vinod Kumar has written an excellent follow-up blog post about Contained Databases inside SQL Server 2012. Since lots of people asked me if I could explain the same concept again, here is a small demonstration of it. Let me show you how a user without a login can help. Before we continue on this subject I strongly recommend that you read my earlier blog post here. In the following demo I am going to walk through this sequence: log in using the System Admin account; create a user without login; check access; impersonate the user without login; check access; revert the impersonation; give permission to the user without login; impersonate the user without login; check access; revert the impersonation; clean up. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Can PetaPoco populate an object using a stored procedure with a join clause?

    - by Mark Kadlec
    I have a stored procedure that does something similar to: SELECT a.TaskId, b.CompanyCode FROM task a JOIN company b ON b.CompanyId = a.CompanyId; I have an object called TaskItem that has the TaskId and CompanyCode properties, but when I execute the following (which I would have assumed would work): var masterDatabase = new Database("MasterConnectionString"); var s = PetaPoco.Sql.Builder.Append("EXEC spGetTasks @@numberOfTasks = @0", numberOfTasks); var tasks = masterDatabase.Query<Task>(s); it fails. The problem is that the CompanyCode column does not exist in the task table. I did a trace, and it seems that PetaPoco is trying to select all the properties from the task table rather than populating from the stored procedure. How can I use PetaPoco to simply populate the list of task objects with the results of the stored procedure?
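
    The trace is the clue: PetaPoco's auto-select feature prepends a SELECT <columns> FROM <table> whenever the supplied SQL does not itself start with SELECT, which an EXEC does not. A sketch of a workaround, assuming the build of PetaPoco in use exposes the EnableAutoSelect flag (mapping is then done purely by result-set column name, so TaskId and CompanyCode must match the TaskItem properties):

        var masterDatabase = new Database("MasterConnectionString");
        masterDatabase.EnableAutoSelect = false;   // stop PetaPoco rewriting the EXEC

        var tasks = masterDatabase.Query<TaskItem>(
            "EXEC spGetTasks @@numberOfTasks = @0", numberOfTasks);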

    Read the article

  • How do I escape a LIKE clause using NHibernate Criteria?

    - by Jon Seigel
    The code we're using is straightforward in this part of the search query: myCriteria.Add( Expression.InsensitiveLike("Code", itemCode, MatchMode.Anywhere)); and this works fine in a production environment. The issue is that one of our clients has item codes that contain % symbols which this query needs to match. The resulting SQL output from this code is similar to: SELECT ... FROM ItemCodes WHERE ... AND Code LIKE '%ItemWith%Symbol%' which clearly explains why they're getting some odd results in item searches. Is there a way to enable escaping using the programmatic Criteria methods? Addendum: We're using a slightly old version of NHibernate, 2.1.0.4000 (current as of writing is 2.1.2.4853), but I checked the release notes, and there was no mention of a fix for this. I didn't find any open issue in their bugtracker either. We're using SQL Server, so I can escape the special characters (%, _, [, and ^) in code really easily, but the point of us using NHibernate was to make our application database-engine-independent as much as possible. Neither Restrictions.InsensitiveLike() nor HqlQueryUtil.GetLikeExpr() escape their inputs, and removing the MatchMode parameter makes no difference as far as escaping goes.
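
    Criteria/Restrictions in this NHibernate version offer no escape-character hook, so one workaround is to sanitize the input before it reaches InsensitiveLike. A sketch; note it hard-codes SQL Server's bracket escaping, which does trade away some of the engine independence you mention:

        // Escape SQL Server LIKE wildcards in user input. '[' must be
        // handled first so the brackets added afterwards are not re-escaped.
        private static string EscapeLikeValue(string value)
        {
            return value
                .Replace("[", "[[]")
                .Replace("%", "[%]")
                .Replace("_", "[_]");
        }

        // usage:
        myCriteria.Add(Expression.InsensitiveLike(
            "Code", EscapeLikeValue(itemCode), MatchMode.Anywhere));

    ('^' only has meaning inside a bracket set, so it does not need escaping in a plain pattern.)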

    Read the article
