Search Results

Search found 8547 results on 342 pages for 'hash join'.

Page 13 of 342

  • SQL Join a View with a Table

    - by gamerzfuse
    I am using the following SQL statement to create a view:

      CREATE VIEW qtyorderedview AS
      SELECT titleditors.title_id, titleditors.ed_id, salesdetails.title_id, salesdetails.qty_shipped
      FROM titleditors, salesdetails
      WHERE titleditors.title_id = salesdetails.title_id

    I need to show each editor's first name, last name, and city where they shipped more than 50 books. The three tables I have are:

      create table editors ( ed_id char(11), ed_lname varchar(20), ed_fname varchar(20), ed_pos varchar(12), phone varchar(10), address varchar(30), city varchar(20), state char(2), zip char(5), ed_boss char(11));
      create table titleditors ( ed_id char(11), title_id char(6), ed_ord integer);
      create table salesdetails ( sonum integer, qty_ordered integer, qty_shipped integer, title_id char(6), date_shipped date);

    Can anyone tell me what the second join would be to produce this result? My first view works fine, but I don't know how to join the view to the editors table to achieve this result. I didn't make the tables, I just have to work with what I was given. Thanks in advance!
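    One possible shape for that second join (a sketch, assuming the view is adjusted to expose ed_id and qty_shipped once each) is to join the view back to editors on ed_id and filter on the shipped quantity:

      SELECT e.ed_fname, e.ed_lname, e.city
      FROM editors e
      JOIN qtyorderedview q ON q.ed_id = e.ed_id
      WHERE q.qty_shipped > 50;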

    Read the article

  • SQL joins "going up" two tables

    - by blcArmadillo
    I'm trying to create a moderately complex query with joins: SELECT `history`.`id`, `parts`.`type_id`, `serialized_parts`.`serial`, `history_actions`.`action`, `history`.`date_added` FROM `history_actions`, `history` LEFT OUTER JOIN `parts` ON `parts`.`id` = `history`.`part_id` LEFT OUTER JOIN `serialized_parts` ON `serialized_parts`.`parts_id` = `history`.`part_id` WHERE `history_actions`.`id` = `history`.`action_id` AND `history`.`unit_id` = '1' ORDER BY `history`.`id` DESC I'd like to replace `parts`.`type_id` in the SELECT statement with `part_list`.`name` where the relationship I need to enforce between the two tables is `part_list`.`id` = `parts`.`type_id`. Also I have to use joins because in some cases `history`.`part_id` may be NULL which obviously isn't a valid part id. How would I modify the query to do this?
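    One way to do this (a sketch, assuming part_list has the id and name columns described above) is to add another LEFT OUTER JOIN, so rows whose part_id is NULL are still preserved:

      SELECT history.id, part_list.name, serialized_parts.serial,
             history_actions.action, history.date_added
      FROM history
      JOIN history_actions ON history_actions.id = history.action_id
      LEFT OUTER JOIN parts ON parts.id = history.part_id
      LEFT OUTER JOIN part_list ON part_list.id = parts.type_id
      LEFT OUTER JOIN serialized_parts ON serialized_parts.parts_id = history.part_id
      WHERE history.unit_id = '1'
      ORDER BY history.id DESC;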

    Read the article

  • SQL: many-to-many relationship, IN condition

    - by Maarten
    I have a table called transactions with a many-to-many relationship to items through the items_transactions table. I want to do something like this:

      SELECT "transactions".*
      FROM "transactions"
      INNER JOIN "items_transactions" ON "items_transactions".transaction_id = "transactions".id
      INNER JOIN "items" ON "items".id = "items_transactions".item_id
      WHERE (items.id IN (<list of items>))

    But this gives me all transactions that have one or more of the items in the list associated with them; I only want the transactions that are associated with all of those items. Any help would be appreciated.
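    One common approach (a sketch; <n> stands for the number of distinct ids in the list) is to group by transaction and keep only the transactions that matched every item:

      SELECT "transactions".*
      FROM "transactions"
      INNER JOIN "items_transactions" ON "items_transactions".transaction_id = "transactions".id
      WHERE "items_transactions".item_id IN (<list of items>)
      GROUP BY "transactions".id
      HAVING COUNT(DISTINCT "items_transactions".item_id) = <n>;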

    Read the article

  • django left join with null

    - by SledgehammerPL
    The model:

      class Product(models.Model):
          name = models.CharField(max_length = 128)
          def __unicode__(self):
              return self.name

      class Receipt(models.Model):
          name = models.CharField(max_length=128)
          components = models.ManyToManyField(Product, through='ReceiptComponent')
          class Admin:
              pass
          def __unicode__(self):
              return self.name

      class ReceiptComponent(models.Model):
          product = models.ForeignKey(Product)
          receipt = models.ForeignKey(Receipt)
          quantity = models.FloatField(max_length=9)
          unit = models.ForeignKey(Unit)
          def __unicode__(self):
              return unicode(self.quantity!=0 and self.quantity or '') + ' ' + unicode(self.unit) + ' ' + self.product.genitive

    The idea: there are components in stock. I'd like to find out which recipes I can make with the components I have. It's not easy - but possible - I made an SQL view which gets the solution. But I'm learning Python and Django, so I'd like to make it Django-style ;D The concept of the solution: get the set of recipes which have at least one available component:

      list_of_available_components = ReceiptComponent.objects.filter(product__in=list_of_available_products).distinct()
      list_of_related_receipts = Receipt.objects.filter(receiptcomponent__in = list_of_available_components).distinct()

    get the recipes (from list_of_related_receipts) which are missing at least one component:

      list_of_incomplete_recipes = (SELECT * FROM drinkbook_receiptcomponent LEFT JOIN drinkstore_stock_products USING(product_id) WHERE drinkstore_stock_products.stock_id IS NULL AND receipt_id IN (SELECT receipt_id FROM drinkbook_receiptcomponent JOIN drinkstore_stock_products USING(product_id)))

    get the recipes (from list_of_related_receipts) which are not in "list_of_incomplete_recipes"
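    At the SQL level, one possible sketch (assuming the drinkbook_receiptcomponent and drinkstore_stock_products tables and the stock_id column from the pseudo-SQL above) returns only the recipes whose every component has a stock row:

      SELECT rc.receipt_id
      FROM drinkbook_receiptcomponent rc
      LEFT JOIN drinkstore_stock_products sp USING (product_id)
      GROUP BY rc.receipt_id
      HAVING COUNT(*) = COUNT(sp.stock_id);  -- no component is missing a matching stock row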

    Read the article

  • mySQL Query JOIN in same table

    - by jeerose
    Table structure goes something like this:

      Table: Purchasers    Columns: id | organization | city | state
      Table: Events        Columns: id | purchaser_id

    My query:

      SELECT purchasers.*, events.id AS event_id
      FROM purchasers
      INNER JOIN events ON events.purchaser_id = purchasers.id
      WHERE purchasers.id = '$id'

    What I would like to do is obviously to select entries by their id from the purchasers table and join from events. That's the easy part. I can also easily do another query to get other purchasers with the same organization, city and state (there are multiples), but I'd like to do it all in the same query. Is there a way I can do this? In short, grab purchasers by their ID but then also select other purchasers that have the same organization, city and state. Thanks.
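    One possible way (a sketch, using the columns above) is to self-join purchasers on organization, city and state, so the query starts from the given id and fans out to all matching purchasers:

      SELECT p2.*, events.id AS event_id
      FROM purchasers p1
      INNER JOIN purchasers p2
              ON p2.organization = p1.organization
             AND p2.city = p1.city
             AND p2.state = p1.state
      LEFT JOIN events ON events.purchaser_id = p2.id
      WHERE p1.id = '$id';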

    Read the article

  • Join a single row in one table to n random rows in another

    - by Einar Egilsson
    Is it possible to make a join in SQL server that joins each row from table A to n random rows in another? For example, say I have a Customer table, a Product table and an Order table. I want to join each customer to 5 random products and insert these rows into the order table. (And each customer should be joined to 5 random rows of his own, I don't want all customers joining to the same 5 rows). Is this possible? I'm using SQL Server 2005 and it's fine if the solution is specific to that. This is a weird requirement but I'm basically making a small data generator to generate some random data.
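    One SQL Server 2005 sketch uses CROSS APPLY with TOP 5 and ORDER BY NEWID(); the inner query may need to reference the outer row (as in the WHERE clause below) so it is re-evaluated per customer rather than cached once. The table and column names here are illustrative assumptions:

      INSERT INTO [Order] (CustomerID, ProductID)
      SELECT c.CustomerID, x.ProductID
      FROM Customer c
      CROSS APPLY (
          SELECT TOP 5 p.ProductID
          FROM Product p
          WHERE c.CustomerID = c.CustomerID   -- touch the outer row to encourage per-row evaluation
          ORDER BY NEWID()
      ) x;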

    Read the article

  • How can I join on a CSV varchar?

    - by mgroves
    I have a varchar field that contains a string like "10,11,12,13". How can I use that CSV string to join to another table with those IDs? Here's the approach I'm taking now: select * from SomeTable a WHERE (',' + @csvString + ',') LIKE '%,' + CONVERT(varchar(25), a.ID) + ',%' Where @csvString is "10,11,12,...". I intend to use this method as a join condition as well. That method works, but it's rather slow (using CAST doesn't improve the speed). I understand that having CSVs in the database like that is usually a very silly idea in most cases, but there's nothing I can do about that.
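    Used as a join condition, the same pattern might look like the sketch below; OtherTable and its csv_ids column are assumed names, and this kind of join cannot use an index, so it stays slow on large tables:

      SELECT a.*
      FROM OtherTable o
      JOIN SomeTable a
        ON ',' + o.csv_ids + ',' LIKE '%,' + CONVERT(varchar(25), a.ID) + ',%';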

    Read the article

  • SELECT subset from two tables and LEFT JOIN results

    - by Doctor Trout
    Hi all, I'm trying to write a bit of SQL for SQLite that will take a subset from two tables (TableA and TableB) and then perform a LEFT JOIN. This is what I've tried, but it produces the wrong result:

      Select * from TableA Left Join TableB using(key) where TableA.key2 = "xxxx" AND TableB.key3 = "yyyy"

    This ignores cases where key2="xxxx" but key3 != "yyyy". I want all the rows from TableA that match my criteria whether or not their corresponding value in TableB matches, but only those rows from TableB that match both conditions. I did manage to solve this by using a VIEW, but I'm sure there must be a better way of doing this. It's just beginning to drive me insane trying to solve it now. (Thanks for any help, hope I've explained this well enough).
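    One likely fix (a sketch) is to move the TableB condition out of the WHERE clause and into the join condition, so non-matching TableB rows come back as NULL instead of being filtered out:

      SELECT *
      FROM TableA
      LEFT JOIN TableB
             ON TableB.key = TableA.key
            AND TableB.key3 = 'yyyy'
      WHERE TableA.key2 = 'xxxx';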

    Read the article

  • Need some help with a join on ContentProviders

    - by Pentium10
    The documentation says Columns from the associated aggregated contact are also available through an implicit join. What's that implicit join?

    Join with Contacts:
      String  LOOKUP_KEY        read-only  See ContactsContract.Contacts
      String  DISPLAY_NAME      read-only  See ContactsContract.Contacts
      long    PHOTO_ID          read-only  See ContactsContract.Contacts
      int     IN_VISIBLE_GROUP  read-only  See ContactsContract.Contacts
      int     HAS_PHONE_NUMBER  read-only  See ContactsContract.Contacts

    I am querying ContactsContract.Data, and I need to use IN_VISIBLE_GROUP and HAS_PHONE_NUMBER, which are defined in ContactsContract.Contacts, as where clauses on that query. How can I make this possible?

    Read the article

  • SQL LEFT JOIN help

    - by Stolz
    My scenario: There are 3 tables for storing TV show information: season, episode and episode_translation. My data: There are 3 seasons with 3 episodes each, but there is only a translation for one episode. My objective: I want to get a list of all the seasons and episodes for a show. If there is a translation available in a specified language, show it, otherwise show NULL. My attempt to get series 1 information in language 1:

      SELECT season_number AS season, number AS episode, name
      FROM season
      NATURAL JOIN episode
      NATURAL LEFT JOIN episode_trans
      WHERE id_serie=1 AND id_lang=1
      ORDER BY season_number, number

    result:

      +--------+---------+--------------------------------+
      | season | episode | name                           |
      +--------+---------+--------------------------------+
      |      3 |       3 | Episode translated into lang 1 |
      +--------+---------+--------------------------------+

    expected result:

      +--------+---------+--------------------------------+
      | season | episode | name                           |
      +--------+---------+--------------------------------+
      |      1 |       1 | NULL                           |
      |      1 |       2 | NULL                           |
      |      1 |       3 | NULL                           |
      |      2 |       1 | NULL                           |
      |      2 |       2 | NULL                           |
      |      2 |       3 | NULL                           |
      |      3 |       1 | NULL                           |
      |      3 |       2 | NULL                           |
      |      3 |       3 | Episode translated into lang 1 |
      +--------+---------+--------------------------------+

    Full DB dump http://pastebin.com/Y8yXNHrH
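    One possible fix (a sketch): apply the language filter to episode_trans before the outer join, so the WHERE clause no longer throws away the untranslated episodes:

      SELECT season_number AS season, number AS episode, name
      FROM season
      NATURAL JOIN episode
      NATURAL LEFT JOIN (SELECT * FROM episode_trans WHERE id_lang = 1) t
      WHERE id_serie = 1
      ORDER BY season_number, number;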

    Read the article

  • MySQL join headaches, please help!

    - by Andrew Heath
    Ok, I've hit the wall here and need some help. Sample tables are as follows: SCENARIO_NATIONS [scenID] [side] [nation] scen001 1 Germany scen001 2 Britain scen001 2 Canada SCENARIO_NEEDUNITS [scenID] [unitID] scen001 0001 scen001 0003 scen001 0107 scen001 0258 scen001 0759 UNIT_BASIC_DATA [unitID] [nation] [name] 0001 Germany Mortars 0003 Germany Infantry 0107 Britain Lt 0258 Britain Infantry 0759 Canada Kilted Yaksmen Goal: given a scenID, pull a list of units from the database sorted by side, nation, name. I can do everything except for the side inclusion with: SELECT scenario_needunits.scenID, unit_basic_data.nation, unit_basic_data.name FROM scenario_needunits LEFT OUTER JOIN unit_basic_data ON scenario_needunits.unitID=unit_basic_data.unitID WHERE scenario_needunits.scenID='scen001' ORDER BY unit_basic_data.nation ASC, unit_basic_data.name ASC I've tried just dropping the SCENARIO_NATIONS table in as a LEFT OUTER JOIN on scenID but what ends up happening is that ALL units come back with a side of 1 because that's always the first side listed for the scenID in the SCENARIO_NATIONS table. Conceptually, what I think needs to happen is SCENARIO_NATIONS must be joined to both the scenID (to restrict it to just that scenario) and to each unit's nation but I don't have any idea how to do that and my Google-fu is inadequate. :-/
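    One approach matching that idea (a sketch) joins SCENARIO_NATIONS on both the scenario and the unit's nation:

      SELECT snu.scenID, sn.side, ubd.nation, ubd.name
      FROM scenario_needunits snu
      LEFT OUTER JOIN unit_basic_data ubd ON snu.unitID = ubd.unitID
      LEFT OUTER JOIN scenario_nations sn
                   ON sn.scenID = snu.scenID
                  AND sn.nation = ubd.nation
      WHERE snu.scenID = 'scen001'
      ORDER BY sn.side ASC, ubd.nation ASC, ubd.name ASC;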

    Read the article

  • 2 Select or 1 Join query ?

    - by xRobot
    I have 2 tables:

      book ( id, title, age ) ---- 100 million rows
      author ( id, book_id, name, born ) ---- 10 million rows

    Now, supposing I have a generic id of a book, I need to print this page:

      Title: mybook
      authors: Tom, Graham, Luis, Clarke, George

    So... what is the best way to do this?

    1) A simple join like this:

      Select book.title, author.name From book, author WHERE ( author.book_id = book.id ) AND ( book.id = 342 )

    2) To avoid the join, I could run 2 simple queries:

      Select title FROM book WHERE id = 342
      Select name FROM author WHERE book_id = 342

    What is the most efficient way?

    Read the article

  • MySQL Join issue

    - by mouthpiec
    Hi, I have the following tables:

      --table sportactivity--
      sport_activity_id, home_team_fk, away_team_fk, competition_id_fk, date, time
      (tuple example) - 1, 33, 41, 5, 2010-04-14, 05:40:00

      --table teams--
      team_id, team_name
      (tuple example) - 1, Algeria

    Now I have the following SQL statement that I use to extract Team A vs Team B:

      SELECT sport_activity_id, T1.team_name AS TeamA, T2.team_name AS TeamB, DATE_FORMAT( DATE, '%d/%m/%Y' ) AS DATE, DATE_FORMAT( TIME, '%H:%i' ) AS TIME FROM sportactivity JOIN teams T1 ON home_team_fk = T1.team_id JOIN teams T2 ON ( away_team_fk = T2.team_id OR away_team_fk = '0' ) WHERE DATE( DATE ) >= CURDATE( ) ORDER BY DATE( DATE )

    My problem is that when team B is empty, I get irrelevant information... it seems to return all the combinations. I need a query so that when team B is equal to 0 (this can occur in my scenario), I get only Team A - Team B (as 0) once.
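    One likely fix (a sketch): the OR in the second join condition matches every team whenever away_team_fk is 0, which multiplies the rows. A LEFT JOIN on the team id alone returns each activity once, with TeamB as NULL when there is no away team:

      SELECT sport_activity_id, T1.team_name AS TeamA, T2.team_name AS TeamB,
             DATE_FORMAT( DATE, '%d/%m/%Y' ) AS DATE,
             DATE_FORMAT( TIME, '%H:%i' ) AS TIME
      FROM sportactivity
      JOIN teams T1 ON home_team_fk = T1.team_id
      LEFT JOIN teams T2 ON away_team_fk = T2.team_id
      WHERE DATE( DATE ) >= CURDATE( )
      ORDER BY DATE( DATE );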

    Read the article

  • mysql join with a "bounce" off a third table

    - by Enkay
    I have 3 mysql tables. companies with company_id and company_name products with product_id and company_id names with product_id, product_name and other info about the product I'm trying to output the product_name and the company_name in one query for a given product_id. Basically I need information from the names and companies tables and the link between them is the products table. How do I do a join that needs to "bounce" off a third table? Something like this but this obviously doesn't work : SELECT product_name, company_name FROM names LEFT OUTER JOIN companies ON (names.product_id = products.product_id and products.company_id = companies.company_id) WHERE product_id = '12345' Thanks!
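    The missing piece is the products table itself in the FROM clause; a sketch that joins through it:

      SELECT names.product_name, companies.company_name
      FROM names
      JOIN products  ON products.product_id  = names.product_id
      JOIN companies ON companies.company_id = products.company_id
      WHERE names.product_id = '12345';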

    Read the article

  • group_concat on an empty join in MySQL

    - by Yossarian
    Hello, I've got the following problem: I have two tables (simplified):

      +--------+   +-----------+
      | User   |   | Role      |
      +--------+   +-----------+
      | ID<PK> |   | ID <PK>   |
      +--------+   | Name      |
                   +-----------+

    and an M:N relationship between them:

      +-------------+
      | User_Role   |
      +-------------+
      | User<FK>    |
      | Role<FK>    |
      +-------------+

    I need to create a view which gives me the User and, in one column, all of his Roles (this is done by group_concat). I've tried the following:

      SELECT u.*, group_concat(r.Name separator ',') as Roles
      FROM User u
      LEFT JOIN User_Role ur ON ur.User=u.ID
      LEFT JOIN Role r ON ur.Role=r.ID
      GROUP BY u.ID;

    However, this works for a user with some defined roles. Users without roles aren't returned. How can I modify the statement to return the User with an empty string in the Roles column when the User doesn't have any Role? Explanation: I'm passing the SQL data directly to a grid, which then formats itself, and it is easier for me to create a slow and complicated view than to format it in my code. I'm using MySQL.
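    With both LEFT JOINs in place, users without roles should come back with NULL in Roles; wrapping the aggregate in COALESCE turns that into an empty string (a sketch):

      SELECT u.*, COALESCE(group_concat(r.Name separator ','), '') AS Roles
      FROM User u
      LEFT JOIN User_Role ur ON ur.User = u.ID
      LEFT JOIN Role r ON ur.Role = r.ID
      GROUP BY u.ID;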

    Read the article

  • Oracle circular join sometimes gives duplicates, but sometimes does not

    - by Kaushik
    By mistake I wrote a query like this:

      select * from a,b,c where a.col=b.col and b.col2=c.col2 and c.col3=a.col4

    So there is a circular join here. Now the thing is, sometimes this query returns duplicate results and sometimes it returns unique (correct) results. I am trying to understand why it does not always give duplicate results. Also, if circular joins are not allowed, how come Oracle does not throw an error? EDIT: This is the actual query. After reading it carefully, I am not sure anymore if this is a circular join or not. It does not seem so... but why do I get duplicates only sometimes?

      select * from a,b,c,d where a.col=b.col and b.col=c.col and c.col2=d.col2 and d.col2 =a.col2

    Read the article

  • Broken count(*) after adding LEFT JOIN

    - by Iain Urquhart
    Since adding the LEFT JOIN to the query below, the count(*) has been returning some strange values, it seems to have added the total rows returned in the query to the 'level': SELECT `n`.*, exp_channel_titles.*, round((`n`.`rgt` - `n`.`lft` - 1) / 2, 0) AS childs, count(*) - 1 + (`n`.`lft` > 1) + 1 AS level, ((min(`p`.`rgt`) - `n`.`rgt` - (`n`.`lft` > 1)) / 2) > 0 AS lower, (((`n`.`lft` - max(`p`.`lft`) > 1))) AS upper FROM `exp_node_tree_6` `n` LEFT JOIN `exp_channel_titles` ON (`n`.`entry_id`=`exp_channel_titles`.`entry_id`), `exp_node_tree_6` `p`, `exp_node_tree_6` WHERE `n`.`lft` BETWEEN `p`.`lft` AND `p`.`rgt` AND ( `p`.`node_id` != `n`.`node_id` OR `n`.`lft` = 1 ) GROUP BY `n`.`node_id` ORDER BY `n`.`lft` I'm totally stumped... Thank you!

    Read the article

  • MySQL Join Comma Separated Field

    - by neeraj
    I have two tables. The first table is a batch table that contains comma-separated student ids in the field "batch":

      batch
      --------------
      id   batch
      --------------
      1    1,2
      2    3,4

    The second table is marks:

      marks
      ----------------------------------
      id   studentid   subject   marks
      1    1           English   50
      2    2           English   40
      3    3           English   70
      4    1           Math      65
      5    4           English   66
      6    5           English   75
      7    2           Math      55

    How can we find those students of the first batch (id = 1) who have scored more than 45 marks in English, without using a subquery? The problem I found with doing this in a single query is that we cannot use IN as an association operator in a JOIN condition. What changes are required in the query below to make it work?

      SELECT * FROM batch INNER JOIN marks ON marks.studentid IN(batch.batch) where batch.id = 1
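    One possible change (a sketch): MySQL's FIND_IN_SET() matches a value against a comma-separated list, so it can take the place of IN in the join condition:

      SELECT marks.*
      FROM batch
      INNER JOIN marks ON FIND_IN_SET(marks.studentid, batch.batch)
      WHERE batch.id = 1
        AND marks.subject = 'English'
        AND marks.marks > 45;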

    Read the article

  • C# LINQ join With Just One Row

    - by Soo
    I'm trying to make a query that grabs a single row from an SQL database and updates it.

      TableA: AId, AValue
      TableB: BId, AId, BValue

    Ok, so TableA and TableB are linked by AId. I want to select a row in TableB based on AValue using a join. The following query is what I have; it only grabs a value from TableB based on AId, and I just don't know how to grab a row from TableB using AValue. I know you would need to use a join, but I'm not sure how to accomplish that.

      var row = DbObject.TableB.Single(x => x.AId == 1);
      row.BValue = 1;
      DbObject.SubmitChanges();

    Read the article

  • mysql select multiple rows in join

    - by julio
    Hi-- I have a simple mySQL problem-- I have two tables, one is a users table, and one is a photos table (each user can upload multiple photos). I'd like to write a query to join these tables, so I can pull all photos associated with a user (up to a certain limit). However, when I do something obvious like this: SELECT a.*, b.* FROM user_table a JOIN photos_table b ON a.id = b.userid it returns a.id, a.name, a.email, a.address, b.id, b.userid, b.photo_title, b.location but it only returns a single photo. Is there a way to return something like: a.id, a.name, a.email, a.address, b.id, b.userid, b.photo_title, b.location, b.id2, b.photo_title2, b.location2 etc. . . for a given LIMIT of photos? Thanks for any ideas.
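    A single query can't return a variable number of columns per row, so one common workaround (a sketch, using the column names above) is to aggregate each user's photos into delimited columns with GROUP_CONCAT and split them in application code:

      SELECT a.id, a.name, a.email, a.address,
             GROUP_CONCAT(b.photo_title) AS photo_titles,
             GROUP_CONCAT(b.location)    AS photo_locations
      FROM user_table a
      LEFT JOIN photos_table b ON a.id = b.userid
      GROUP BY a.id;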

    Read the article

  • MySQL AND alternative for each table in a join

    - by Scott
    I have a simple join in a query; however, I need a condition "confirmed='yes'" on both of the tables, but if one of the tables doesn't have any rows that match, the query returns no rows. Database:

      .----------parties----------.
      | id - party_id - confirmed |
      |---------------------------|
      | 1      1         yes      |
      | 1      2         no       |
      | 1      3         no       |
      +---------------------------+

      .-----------events----------.
      | id - event_id - confirmed |
      |---------------------------|
      | 1      1         no       |
      +---------------------------+

    Query:

      SELECT p.party_id, e.event_id
      FROM parties p
      LEFT JOIN events e ON p.id=e.id
      WHERE p.id = '1' AND p.party_id IN (1,2,3) AND e.event_id IN (1)
        AND p.confirmed='yes' AND e.confirmed='yes'

    It returns nothing, but I want it to return party_id 1 with an empty event_id. I hope this makes sense and I'm not missing anything. Thanks for your help!
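    One likely fix (a sketch): conditions on the outer (events) side belong in the ON clause; in the WHERE clause they filter out the NULL rows the LEFT JOIN produces:

      SELECT p.party_id, e.event_id
      FROM parties p
      LEFT JOIN events e
             ON p.id = e.id
            AND e.event_id IN (1)
            AND e.confirmed = 'yes'
      WHERE p.id = '1'
        AND p.party_id IN (1,2,3)
        AND p.confirmed = 'yes';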

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates Memory requirement at compile time, when stored procedure or other plan caching mechanisms like sp_executesql or prepared statement are used, the memory requirement is estimated based on first set of execution parameters. This is a common reason for spill over tempdb and hence poor performance. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article which covers an introduction and Query memory for Sort. In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a query does not change significantly based on predicates.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts  Let’s create a Customer’s State table that has 99% of customers in NY and the rest 1% in WA.Customers table used in Part I of this article is also used here.To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'. --Example provided by www.sqlworkshops.com drop table CustomersState go create table CustomersState (CustomerID int primary key, Address char(200), State char(2)) go insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers update CustomersState set State = 'NY' where CustomerID % 100 != 1 update CustomersState set State = 'WA' where CustomerID % 100 = 1 go update statistics CustomersState with fullscan go   Let’s create a stored procedure that joins customers with CustomersState table with a predicate on State. --Example provided by www.sqlworkshops.com create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1) end go  Let’s execute the stored procedure first with parameter value ‘WA’ – which will select 1% of data. set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' goThe stored procedure took 294 ms to complete.  The stored procedure was granted 6704 KB based on 8000 rows being estimated.  The estimated number of rows, 8000 is similar to actual number of rows 8000 and hence the memory estimation should be ok.  There was no Hash Warning in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Now let’s execute the stored procedure with parameter value ‘NY’ – which will select 99% of data. 
-Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  The stored procedure took 2922 ms to complete.   The stored procedure was granted 6704 KB based on 8000 rows being estimated.    The estimated number of rows, 8000 is way different from the actual number of rows 792000 because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘WA’ in our case. This underestimation will lead to spill over tempdb, resulting in poor performance.   There was Hash Warning (Recursion) in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Let’s recompile the stored procedure and then let’s first execute the stored procedure with parameter value ‘NY’.  In a production instance it is not advisable to use sp_recompile instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile, refer to our webcasts, www.sqlworkshops.com/webcasts for further details.   exec sp_recompile CustomersByState go --Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  Now the stored procedure took only 1046 ms instead of 2922 ms.   The stored procedure was granted 146752 KB of memory. The estimated number of rows, 792000 is similar to actual number of rows of 792000. Better performance of this stored procedure execution is due to better estimation of memory and avoiding spill over tempdb.   There was no Hash Warning in SQL Profiler.   Now let’s execute the stored procedure with parameter value ‘WA’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go  The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms.    This stored procedure was granted more memory (146752 KB) than necessary (6704 KB) based on parameter value ‘NY’ for estimation (792000 rows) instead of parameter value ‘WA’ for estimation (8000 rows). This is because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘NY’ in this case. This overestimation leads to poor performance of this Hash Match operation, it might also affect the performance of other concurrently executing queries requiring memory and hence overestimation is not recommended.     The estimated number of rows, 792000 is much more than the actual number of rows of 8000.  Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. Other possibility is to use recompile hint or optimize for hint to allocate memory for predefined data range.Let’s recreate the stored procedure with recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, recompile) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 297 ms and 1102 ms in line with previous optimal execution times.   The stored procedure with parameter value ‘WA’ has good estimation like before.   Estimated number of rows of 8000 is similar to actual number of rows of 8000.   
The stored procedure with parameter value ‘NY’ also has good estimation and memory grant like before because the stored procedure was recompiled with current set of parameter values.  Estimated number of rows of 792000 is similar to actual number of rows of 792000.    The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.   There was no Hash Warning in SQL Profiler.   Let’s recreate the stored procedure with optimize for hint of ‘NY’. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, optimize for (@State = 'NY')) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 353 ms with parameter value ‘WA’, this is much slower than the optimal execution time of 294 ms we observed previously. This is because of overestimation of memory. The stored procedure with parameter value ‘NY’ has optimal execution time like before.   The stored procedure with parameter value ‘WA’ has overestimation of rows because of optimize for hint value of ‘NY’.   Unlike before, more memory was estimated to this stored procedure based on optimize for hint value ‘NY’.    The stored procedure with parameter value ‘NY’ has good estimation because of optimize for hint value of ‘NY’. Estimated number of rows of 792000 is similar to actual number of rows of 792000.   Optimal amount memory was estimated to this stored procedure based on optimize for hint value ‘NY’.   There was no Hash Warning in SQL Profiler.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   Summary: Cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb which could be very expensive compared to compilation cost. The other possibility is to use optimize for hint, but in case one sorts more data than hinted by optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. 
When the values used in optimize for hint are archived from the database, the estimation will be wrong leading to worst performance, so one has to exercise caution before using optimize for hint, recompile hint is better in this case.   I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.  Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet.These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.   Disclaimer and copyright information:This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article
