Search Results

Search found 15803 results on 633 pages for 'self join'.


  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example use a common partition scheme. The partition function uses 41 equal-size partitions:

        CREATE PARTITION FUNCTION PFT (integer)
        AS RANGE RIGHT
        FOR VALUES
        (
            125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000,
            1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000,
            2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000,
            3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000,
            4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000
        );
        GO
        CREATE PARTITION SCHEME PST
        AS PARTITION PFT
        ALL TO ([PRIMARY]);

    The two tables are:

        CREATE TABLE dbo.T1
        (
            TID integer NOT NULL IDENTITY(0,1),
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID)
        );

        CREATE TABLE dbo.T2
        (
            TID integer NOT NULL,
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID)
        );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

        INSERT dbo.T1 WITH (TABLOCKX) (Column1)
        SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1
        FROM dbo.Numbers AS N
        WHERE n BETWEEN 1 AND 5000000;

    In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows:

        CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

        WITH
            L0 AS (SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS (SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
            L2 AS (SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
            L3 AS (SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
            L4 AS (SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
            L5 AS (SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
            Nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
        INSERT dbo.Numbers WITH (TABLOCKX)
        SELECT TOP (10000000) n
        FROM Nums
        ORDER BY n
        OPTION (MAXDOP 1);

    Table T1 contains data like this:

    Next we load data into table T2. The relationship between the two tables is that table T2 contains ‘n’ rows for each row in table T1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

        INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1)
        SELECT T.TID, N.n
        FROM dbo.T1 AS T
        JOIN dbo.Numbers AS N
            ON N.n >= 1
            AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T1 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows each (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table T2:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T2 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is also empty). Ok, that’s the test data done.
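    If you want to sanity-check the load before moving on, two quick counts confirm the row counts quoted above:

        SELECT COUNT_BIG(*) FROM dbo.T1;  -- 5,000,000 rows
        SELECT COUNT_BIG(*) FROM dbo.T2;  -- about 15,000,000 rows (3 per T1 row on average)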
    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables T1 and T2 on the TID column:

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads. With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Run without the actual execution plan or STATISTICS IO collection, for maximum performance, the query returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

    For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. This guarantee is enforced in this parallel hash join plan by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. Because both exchanges use the same hash function, rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE
            $PARTITION.PFT(T1.TID) = 1
            AND $PARTITION.PFT(T2.TID) = 1
        OPTION (MAXDOP 1);

    The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps enough to be competitive with the parallel hash join chosen by the optimizer. This raises a question: if the most efficient way to join one partition from each of the tables is a merge join, why does the optimizer not choose a merge join for the full query?
    Forcing a Merge Join

    Let’s force the optimizer to use a merge join on the test query using a hint:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN);

    This is the execution plan selected by the optimizer:

    This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan uses only a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN, QUERYTRACEON 8649);

    The execution plan is:

    This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good.

    A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro). CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs).

    Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than a parallel one in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be the cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

        -- Pretend IOs are 50x cost temporarily
        DBCC SETIOWEIGHT(50);

        -- Co-located hash join
        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (RECOMPILE);

        -- Reset IO costing
        DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    The estimated execution plan for the collocated join is:

    The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange… and so on until all partitions have been processed.

    It is important to understand that the parallel scans in this plan are different from those in the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition.

    The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan.

    In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is by far the worst result so far, so what went wrong? It turns out that the cardinality estimate for the single-partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimate was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising, because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here, rather than system table information such as sys.partitions.

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know two things:

    - Merge join does not require a memory grant; and
    - Merge join was the optimizer’s preferred join option for a single-partition join.

    Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do a collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query: use sys.partitions to find the partition numbers, use CROSS APPLY to get a count per partition, then sum the partial counts in a final step. The following query implements this idea:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            -- Partition numbers
            SELECT p.partition_number
            FROM sys.partitions AS p
            WHERE
                p.[object_id] = OBJECT_ID(N'T1', N'U')
                AND p.index_id = 1
        ) AS P
        CROSS APPLY
        (
            -- Count per collocated join
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

    The estimated plan is:

    The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

    Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with the 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory.

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
    We can work around that by writing the partition numbers to a temporary table (or table variable):

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        CREATE TABLE #P (partition_number integer PRIMARY KEY);

        INSERT #P (partition_number)
        SELECT p.partition_number
        FROM sys.partitions AS p
        WHERE
            p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1;

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

        DROP TABLE #P;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to. In fact, the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time). Unfortunately, the parallel plan it found seemed more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that; we’ll be here all night.

    In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

    Parallel Collocated Merge Join

    We can produce the desired parallel plan using trace flag 8649 again:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    The actual execution plan is:

    One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

    Performance

    The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post, from quickest to slowest (all timings include creation, population, and deletion of the temporary table where appropriate):

    - Collocated parallel merge join: 1350ms
    - Parallel hash join: 2600ms
    - Collocated serial merge join: 3500ms
    - Serial merge join: 5000ms
    - Parallel merge join: 8400ms
    - Collocated parallel hash join: 25,300ms (hash spill per partition)

    The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
    This plan uses 16 threads at DOP 8, but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if they bother you.

    Parallel Collocated Merge Join with Demand Partitioning

    This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            VALUES
                (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
                (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
                (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
        ) AS P (partition_number)
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE
                $PARTITION.PFT(T1.TID) = p.partition_number
                AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    The actual execution plan is:

    The parallel collocated hash join plan is reproduced below for comparison:

    The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

    Final Words

    It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB.

    The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables: reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

    From a Thread’s Point of View

    If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators.

    Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
    The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

    Related Reading:
    - Understanding and Using Parallelism in SQL Server
    - Parallel Execution Plans Suck

    © 2013 Paul White – All Rights Reserved
    Twitter: @SQL_Kiwi

    Read the article

  • In SQL, if we rename INNER JOIN as INTERSECT JOIN, LEFT OUTER JOIN as LEFT UNION JOIN, and FULL OUTER JOIN as FULL UNION JOIN

    - by Jian Lin
    In SQL, the name Join gives an idea of "merging" or a sense of "union", of making something bigger. But in fact, as in the other post http://stackoverflow.com/questions/2706051/in-sql-a-join-is-actually-an-intersection-and-it-is-also-a-linkage-or-a-sidew it turns out that a Join (Inner Join) is actually an intersection. So if we think of

        Join = Inner Join = Intersect Join
        Left Outer Join = Left Union Join
        Full Outer Join = Full Union Join = Union Join

    then we always get a feel for what's happening, and may never forget what they are. In a way, we can think of Intersect as "making it less", and therefore as excluding something. That's why the name "Join" doesn't sit well with the idea of "Intersect". But in fact, both Intersect and Union can be thought of as: Union, bringing things together and merging them unconditionally; Intersect, bringing things together and merging them based on some condition. So the "bringing something together" is probably what "Join" is all about. Intersection is like a "half glass of water" -- we can think of it as "excluding something" or as "bringing things together and accepting the common ones". So if the term "Intersect Join" were used, maybe a clear picture would emerge, and "Union Join" could paint a clear picture too. Maybe the terms "Inner Join" and "Outer Join" become very clear once we use SQL a lot. Somehow, though, the word "Outer" tends to give a feeling of being "outside" and excluding something, rather than of a "Union".
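    A tiny T-SQL sketch (hypothetical temp tables #A and #B, not from the question itself) shows how the proposed names line up with the set operators they borrow from:

        -- Hypothetical keys: #A = {1, 2}, #B = {2, 3}
        CREATE TABLE #A (k integer NOT NULL);
        CREATE TABLE #B (k integer NOT NULL);
        INSERT #A (k) VALUES (1), (2);
        INSERT #B (k) VALUES (2), (3);

        -- The set operators behind the proposed names
        SELECT k FROM #A INTERSECT SELECT k FROM #B;  -- {2}
        SELECT k FROM #A UNION     SELECT k FROM #B;  -- {1, 2, 3}

        -- "Intersect join": only the common key survives
        SELECT A.k, B.k FROM #A AS A INNER JOIN #B AS B ON B.k = A.k;       -- (2, 2)

        -- "Left union join": all of #A, NULL-extended where #B has no match
        SELECT A.k, B.k FROM #A AS A LEFT OUTER JOIN #B AS B ON B.k = A.k;  -- (1, NULL), (2, 2)

        -- "Full union join": every key from both sides
        SELECT A.k, B.k FROM #A AS A FULL OUTER JOIN #B AS B ON B.k = A.k;  -- (1, NULL), (2, 2), (NULL, 3)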

    Read the article

  • Add collision detection to enemy sprites?

    - by xBroak
    i'd like to add the same collision detection used by the player sprite to the enemy sprites or 'creeps' ive added all the relevant code I can see yet collisons are still not being detected and handled, please find below the class, I have no idea what is wrong currently, the list of walls to collide with is 'wall_list' import pygame import pauseScreen as dm import re from pygame.sprite import Sprite from pygame import Rect, Color from random import randint, choice from vec2d import vec2d from simpleanimation import SimpleAnimation import displattxt black = (0,0,0) white = (255,255,255) blue = (0,0,255) green = (101,194,151) global currentEditTool currentEditTool = "Tree" global editMap editMap = False open('MapMaker.txt', 'w').close() def draw_background(screen, tile_img): screen.fill(black) img_rect = tile_img.get_rect() global rect rect = img_rect nrows = int(screen.get_height() / img_rect.height) + 1 ncols = int(screen.get_width() / img_rect.width) + 1 for y in range(nrows): for x in range(ncols): img_rect.topleft = (x * img_rect.width, y * img_rect.height) screen.blit(tile_img, img_rect) def changeTool(): if currentEditTool == "Tree": None elif currentEditTool == "Rock": None def pauseGame(): red = 255, 0, 0 green = 0,255, 0 blue = 0, 0,255 screen.fill(black) pygame.display.update() if editMap == False: choose = dm.dumbmenu(screen, [ 'Resume', 'Enable Map Editor', 'Quit Game'], 64,64,None,32,1.4,green,red) if choose == 0: print("hi") elif choose ==1: global editMap editMap = True elif choose ==2: print("bob") elif choose ==3: print("bob") elif choose ==4: print("bob") else: None else: choose = dm.dumbmenu(screen, [ 'Resume', 'Disable Map Editor', 'Quit Game'], 64,64,None,32,1.4,green,red) if choose == 0: print("Resume") elif choose ==1: print("Dis ME") global editMap editMap = False elif choose ==2: print("bob") elif choose ==3: print("bob") elif choose ==4: print("bob") else: None class Wall(pygame.sprite.Sprite): # Constructor function def __init__(self,x,y,width,height): pygame.sprite.Sprite.__init__(self) self.image = pygame.Surface([width, height]) self.image.fill(green) self.rect = self.image.get_rect() self.rect.y = y self.rect.x = x class insertTree(pygame.sprite.Sprite): def __init__(self,x,y,width,height, typ): pygame.sprite.Sprite.__init__(self) self.image = pygame.image.load("images/map/tree.png").convert() self.image.set_colorkey(white) self.rect = self.image.get_rect() self.rect.y = y self.rect.x = x class insertRock(pygame.sprite.Sprite): def __init__(self,x,y,width,height, typ): pygame.sprite.Sprite.__init__(self) self.image = pygame.image.load("images/map/rock.png").convert() self.image.set_colorkey(white) self.rect = self.image.get_rect() self.rect.y = y self.rect.x = x class Creep(pygame.sprite.Sprite): """ A creep sprite that bounces off walls and changes its direction from time to time. """ change_x=0 change_y=0 def __init__( self, screen, creep_image, explosion_images, field, init_position, init_direction, speed): """ Create a new Creep. screen: The screen on which the creep lives (must be a pygame Surface object, such as pygame.display) creep_image: Image (surface) object for the creep explosion_images: A list of image objects for the explosion animation. field: A Rect specifying the 'playing field' boundaries. The Creep will bounce off the 'walls' of this field. init_position: A vec2d or a pair specifying the initial position of the creep on the screen. init_direction: A vec2d or a pair specifying the initial direction of the creep. 
Must have an angle that is a multiple of 45 degres. speed: Creep speed, in pixels/millisecond (px/ms) """ Sprite.__init__(self) self.screen = screen self.speed = speed self.field = field self.rect = creep_image.get_rect() # base_image holds the original image, positioned to # angle 0. # image will be rotated. # self.base_image = creep_image self.image = self.base_image self.explosion_images = explosion_images # A vector specifying the creep's position on the screen # self.pos = vec2d(init_position) # The direction is a normalized vector # self.direction = vec2d(init_direction).normalized() self.state = Creep.ALIVE self.health = 15 def is_alive(self): return self.state in (Creep.ALIVE, Creep.EXPLODING) def changespeed(self,x,y): self.change_x+=x self.change_y+=y def update(self, time_passed, walls): """ Update the creep. time_passed: The time passed (in ms) since the previous update. """ if self.state == Creep.ALIVE: # Maybe it's time to change the direction ? # self._change_direction(time_passed) # Make the creep point in the correct direction. # Since our direction vector is in screen coordinates # (i.e. right bottom is 1, 1), and rotate() rotates # counter-clockwise, the angle must be inverted to # work correctly. # self.image = pygame.transform.rotate( self.base_image, -self.direction.angle) # Compute and apply the displacement to the position # vector. The displacement is a vector, having the angle # of self.direction (which is normalized to not affect # the magnitude of the displacement) # displacement = vec2d( self.direction.x * self.speed * time_passed, self.direction.y * self.speed * time_passed) self.pos += displacement # When the image is rotated, its size is changed. # We must take the size into account for detecting # collisions with the walls. # self.image_w, self.image_h = self.image.get_size() bounds_rect = self.field.inflate( -self.image_w, -self.image_h) if self.pos.x < bounds_rect.left: self.pos.x = bounds_rect.left self.direction.x *= -1 elif self.pos.x > bounds_rect.right: self.pos.x = bounds_rect.right self.direction.x *= -1 elif self.pos.y < bounds_rect.top: self.pos.y = bounds_rect.top self.direction.y *= -1 elif self.pos.y > bounds_rect.bottom: self.pos.y = bounds_rect.bottom self.direction.y *= -1 # collision detection old_x=bounds_rect.left new_x=old_x+self.direction.x bounds_rect.left = new_x # hit a wall? collide = pygame.sprite.spritecollide(self, walls, False) if collide: # yes bounds_rect.left=old_x old_y=self.pos.y new_y=old_y+self.direction.y self.pos.y = new_y collide = pygame.sprite.spritecollide(self, walls, False) if collide: # yes self.pos.y=old_y elif self.state == Creep.EXPLODING: if self.explode_animation.active: self.explode_animation.update(time_passed) else: self.state = Creep.DEAD self.kill() elif self.state == Creep.DEAD: pass #------------------ PRIVATE PARTS ------------------# # States the creep can be in. # # ALIVE: The creep is roaming around the screen # EXPLODING: # The creep is now exploding, just a moment before dying. # DEAD: The creep is dead and inactive # (ALIVE, EXPLODING, DEAD) = range(3) _counter = 0 def _change_direction(self, time_passed): """ Turn by 45 degrees in a random direction once per 0.4 to 0.5 seconds. """ self._counter += time_passed if self._counter > randint(400, 500): self.direction.rotate(45 * randint(-1, 1)) self._counter = 0 def _point_is_inside(self, point): """ Is the point (given as a vec2d) inside our creep's body? 
""" img_point = point - vec2d( int(self.pos.x - self.image_w / 2), int(self.pos.y - self.image_h / 2)) try: pix = self.image.get_at(img_point) return pix[3] > 0 except IndexError: return False def _decrease_health(self, n): """ Decrease my health by n (or to 0, if it's currently less than n) """ self.health = max(0, self.health - n) if self.health == 0: self._explode() def _explode(self): """ Starts the explosion animation that ends the Creep's life. """ self.state = Creep.EXPLODING pos = ( self.pos.x - self.explosion_images[0].get_width() / 2, self.pos.y - self.explosion_images[0].get_height() / 2) self.explode_animation = SimpleAnimation( self.screen, pos, self.explosion_images, 100, 300) global remainingCreeps remainingCreeps-=1 if remainingCreeps == 0: print("all dead") def draw(self): """ Blit the creep onto the screen that was provided in the constructor. """ if self.state == Creep.ALIVE: # The creep image is placed at self.pos. To allow for # smooth movement even when the creep rotates and the # image size changes, its placement is always # centered. # self.draw_rect = self.image.get_rect().move( self.pos.x - self.image_w / 2, self.pos.y - self.image_h / 2) self.screen.blit(self.image, self.draw_rect) # The health bar is 15x4 px. # health_bar_x = self.pos.x - 7 health_bar_y = self.pos.y - self.image_h / 2 - 6 self.screen.fill( Color('red'), (health_bar_x, health_bar_y, 15, 4)) self.screen.fill( Color('green'), ( health_bar_x, health_bar_y, self.health, 4)) elif self.state == Creep.EXPLODING: self.explode_animation.draw() elif self.state == Creep.DEAD: pass def mouse_click_event(self, pos): """ The mouse was clicked in pos. """ if self._point_is_inside(vec2d(pos)): self._decrease_health(3) #begin new player class Player(pygame.sprite.Sprite): change_x=0 change_y=0 frame = 0 def __init__(self,x,y): pygame.sprite.Sprite.__init__(self) # LOAD PLATER IMAGES # Set height, width self.images = [] for i in range(1,17): img = pygame.image.load("images/player/" + str(i)+".png").convert() #player images img.set_colorkey(white) self.images.append(img) self.image = self.images[0] self.rect = self.image.get_rect() self.rect.y = y self.rect.x = x self.health = 15 self.image_w, self.image_h = self.image.get_size() health_bar_x = self.rect.x - 7 health_bar_y = self.rect.y - self.image_h / 2 - 6 screen.fill( Color('red'), (health_bar_x, health_bar_y, 15, 4)) screen.fill( Color('green'), ( health_bar_x, health_bar_y, self.health, 4)) def changespeed(self,x,y): self.change_x+=x self.change_y+=y def _decrease_health(self, n): """ Decrease my health by n (or to 0, if it's currently less than n) """ self.health = max(0, self.health - n) if self.health == 0: self._explode() def update(self,walls): # collision detection old_x=self.rect.x new_x=old_x+self.change_x self.rect.x = new_x # hit a wall? collide = pygame.sprite.spritecollide(self, walls, False) if collide: # yes self.rect.x=old_x old_y=self.rect.y new_y=old_y+self.change_y self.rect.y = new_y collide = pygame.sprite.spritecollide(self, walls, False) if collide: # yes self.rect.y=old_y # right to left if self.change_x < 0: self.frame += 1 if self.frame > 3*4: self.frame = 0 # Grab the image, divide by 4 # every 4 frames. self.image = self.images[self.frame//4] # Move left to right. # images 4...7 instead of 0...3. 
if self.change_x > 0: self.frame += 1 if self.frame > 3*4: self.frame = 0 self.image = self.images[self.frame//4+4] if self.change_y > 0: self.frame += 1 if self.frame > 3*4: self.frame = 0 self.image = self.images[self.frame//4+4+4] if self.change_y < 0: self.frame += 1 if self.frame > 3*4: self.frame = 0 self.image = self.images[self.frame//4+4+4+4] score = 0 # initialize pyGame pygame.init() # 800x600 sized screen global screen screen = pygame.display.set_mode([800, 600]) screen.fill(black) #bg_tile_img = pygame.image.load('images/map/grass.png').convert_alpha() #draw_background(screen, bg_tile_img) #pygame.display.flip() # Set title pygame.display.set_caption('Test') #background = pygame.Surface(screen.get_size()) #background = background.convert() #background.fill(black) # Create the player player = Player( 50,50 ) player.rect.x=50 player.rect.y=50 movingsprites = pygame.sprite.RenderPlain() movingsprites.add(player) # Make the walls. (x_pos, y_pos, width, height) global wall_list wall_list=pygame.sprite.RenderPlain() wall=Wall(0,0,10,600) # left wall wall_list.add(wall) wall=Wall(10,0,790,10) # top wall wall_list.add(wall) #wall=Wall(10,200,100,10) # poke wall wall_list.add(wall) wall=Wall(790,0,10,600) #(x,y,thickness, height) wall_list.add(wall) wall=Wall(10,590,790,10) #(x,y,thickness, height) wall_list.add(wall) f = open('MapMaker.txt') num_lines = sum(1 for line in f) print(num_lines) lineCount = 0 with open("MapMaker.txt") as infile: for line in infile: f = open('MapMaker.txt') print(line) coords = line.split(',') #print(coords[0]) #print(coords[1]) #print(coords[2]) #print(coords[3]) #print(coords[4]) if "tree" in line: print("tree in") wall=insertTree(int(coords[0]),int(coords[1]), int(coords[2]),int(coords[3]),coords[4]) wall_list.add(wall) elif "rock" in line: print("rock in") wall=insertRock(int(coords[0]),int(coords[1]), int(coords[2]),int(coords[3]),coords[4] ) wall_list.add(wall) width = 20 height = 540 height = height - 48 for i in range(0,23): width = width + 32 name = insertTree(width,540,790,10,"tree") #wall_list.add(name) name = insertTree(width,height,690,10,"tree") #wall_list.add(name) CREEP_SPAWN_TIME = 200 # frames creep_spawn = CREEP_SPAWN_TIME clock = pygame.time.Clock() bg_tile_img = pygame.image.load('images/map/grass.png').convert() img_rect = bg_tile_img FIELD_RECT = Rect(50, 50, 700, 500) CREEP_FILENAMES = [ 'images/player/1.png', 'images/player/1.png', 'images/player/1.png'] N_CREEPS = 3 creep_images = [ pygame.image.load(filename).convert_alpha() for filename in CREEP_FILENAMES] explosion_img = pygame.image.load('images/map/tree.png').convert_alpha() explosion_images = [ explosion_img, pygame.transform.rotate(explosion_img, 90)] creeps = pygame.sprite.RenderPlain() done = False #bg_tile_img = pygame.image.load('images/map/grass.png').convert() #draw_background(screen, bg_tile_img) totalCreeps = 0 remainingCreeps = 3 while done == False: creep_images = pygame.image.load("images/player/1.png").convert() creep_images.set_colorkey(white) draw_background(screen, bg_tile_img) if len(creeps) != N_CREEPS: if totalCreeps < N_CREEPS: totalCreeps = totalCreeps + 1 print(totalCreeps) creeps.add( Creep( screen=screen, creep_image=creep_images, explosion_images=explosion_images, field=FIELD_RECT, init_position=( randint(FIELD_RECT.left, FIELD_RECT.right), randint(FIELD_RECT.top, FIELD_RECT.bottom)), init_direction=(choice([-1, 1]), choice([-1, 1])), speed=0.01)) for creep in creeps: creep.update(60,wall_list) creep.draw() for event in pygame.event.get(): if 
event.type == pygame.QUIT: done=True if event.type == pygame.KEYDOWN: if event.key == pygame.K_LEFT: player.changespeed(-2,0) creep.changespeed(-2,0) if event.key == pygame.K_RIGHT: player.changespeed(2,0) creep.changespeed(2,0) if event.key == pygame.K_UP: player.changespeed(0,-2) creep.changespeed(0,-2) if event.key == pygame.K_DOWN: player.changespeed(0,2) creep.changespeed(0,2) if event.key == pygame.K_ESCAPE: pauseGame() if event.key == pygame.K_1: global currentEditTool currentEditTool = "Tree" changeTool() if event.key == pygame.K_2: global currentEditTool currentEditTool = "Rock" changeTool() if event.type == pygame.KEYUP: if event.key == pygame.K_LEFT: player.changespeed(2,0) creep.changespeed(2,0) if event.key == pygame.K_RIGHT: player.changespeed(-2,0) creep.changespeed(-2,0) if event.key == pygame.K_UP: player.changespeed(0,2) creep.changespeed(0,2) if event.key == pygame.K_DOWN: player.changespeed(0,-2) creep.changespeed(0,-2) if event.type == pygame.MOUSEBUTTONDOWN and pygame.mouse.get_pressed()[0]: for creep in creeps: creep.mouse_click_event(pygame.mouse.get_pos()) if editMap == True: x,y = pygame.mouse.get_pos() if currentEditTool == "Tree": name = insertTree(x-10,y-25, 10 , 10, "tree") wall_list.add(name) wall_list.draw(screen) f = open('MapMaker.txt', "a+") image = pygame.image.load("images/map/tree.png").convert() screen.blit(image, (30,10)) pygame.display.flip() f.write(str(x) + "," + str(y) + ",790,10, tree\n") #f.write("wall=insertTree(" + str(x) + "," + str(y) + ",790,10)\nwall_list.add(wall)\n") elif currentEditTool == "Rock": name = insertRock(x-10,y-25, 10 , 10,"rock") wall_list.add(name) wall_list.draw(screen) f = open('MapMaker.txt', "a+") f.write(str(x) + "," + str(y) + ",790,10,rock\n") #f.write("wall=insertRock(" + str(x) + "," + str(y) + ",790,10)\nwall_list.add(wall)\n") else: None #pygame.display.flip() player.update(wall_list) movingsprites.draw(screen) wall_list.draw(screen) pygame.display.flip() clock.tick(60) pygame.quit()

    Read the article

  • Add collision detection to a sprite?

    - by xBroak
    bassically im trying to add collision detection to the sprite below, using the following: self.rect = bounds_rect collide = pygame.sprite.spritecollide(self, wall_list, False) if collide: # yes print("collide") However it seems that when the collide is triggered it continuously prints 'collide' over and over when instead i want them to simply not be able to walk through the object, any help? def update(self, time_passed): """ Update the creep. time_passed: The time passed (in ms) since the previous update. """ if self.state == Creep.ALIVE: # Maybe it's time to change the direction ? # self._change_direction(time_passed) # Make the creep point in the correct direction. # Since our direction vector is in screen coordinates # (i.e. right bottom is 1, 1), and rotate() rotates # counter-clockwise, the angle must be inverted to # work correctly. # self.image = pygame.transform.rotate( self.base_image, -self.direction.angle) # Compute and apply the displacement to the position # vector. The displacement is a vector, having the angle # of self.direction (which is normalized to not affect # the magnitude of the displacement) # displacement = vec2d( self.direction.x * self.speed * time_passed, self.direction.y * self.speed * time_passed) self.pos += displacement # When the image is rotated, its size is changed. # We must take the size into account for detecting # collisions with the walls. # self.image_w, self.image_h = self.image.get_size() global bounds_rect bounds_rect = self.field.inflate( -self.image_w, -self.image_h) if self.pos.x < bounds_rect.left: self.pos.x = bounds_rect.left self.direction.x *= -1 elif self.pos.x > bounds_rect.right: self.pos.x = bounds_rect.right self.direction.x *= -1 elif self.pos.y < bounds_rect.top: self.pos.y = bounds_rect.top self.direction.y *= -1 elif self.pos.y > bounds_rect.bottom: self.pos.y = bounds_rect.bottom self.direction.y *= -1 self.rect = bounds_rect collide = pygame.sprite.spritecollide(self, wall_list, False) if collide: # yes print("collide") elif self.state == Creep.EXPLODING: if self.explode_animation.active: self.explode_animation.update(time_passed) else: self.state = Creep.DEAD self.kill() elif self.state == Creep.DEAD: pass #------------------ PRIVATE PARTS ------------------# # States the creep can be in. # # ALIVE: The creep is roaming around the screen # EXPLODING: # The creep is now exploding, just a moment before dying. # DEAD: The creep is dead and inactive # (ALIVE, EXPLODING, DEAD) = range(3) _counter = 0 def _change_direction(self, time_passed): """ Turn by 45 degrees in a random direction once per 0.4 to 0.5 seconds. """ self._counter += time_passed if self._counter > randint(400, 500): self.direction.rotate(45 * randint(-1, 1)) self._counter = 0 def _point_is_inside(self, point): """ Is the point (given as a vec2d) inside our creep's body? """ img_point = point - vec2d( int(self.pos.x - self.image_w / 2), int(self.pos.y - self.image_h / 2)) try: pix = self.image.get_at(img_point) return pix[3] > 0 except IndexError: return False def _decrease_health(self, n): """ Decrease my health by n (or to 0, if it's currently less than n) """ self.health = max(0, self.health - n) if self.health == 0: self._explode() def _explode(self): """ Starts the explosion animation that ends the Creep's life. 
""" self.state = Creep.EXPLODING pos = ( self.pos.x - self.explosion_images[0].get_width() / 2, self.pos.y - self.explosion_images[0].get_height() / 2) self.explode_animation = SimpleAnimation( self.screen, pos, self.explosion_images, 100, 300) global remainingCreeps remainingCreeps-=1 if remainingCreeps == 0: print("all dead")

    Read the article

  • If Inner Join can be thought of as Cross Join but keeping only the records satisfying the condition

    - by Jian Lin
    If an Inner Join can be thought of as a cross join that then keeps the records satisfying the condition, then a LEFT OUTER JOIN can be thought of as that, plus ONE record for each left-table row that doesn't satisfy the condition. In other words, it is not a cross join that "goes easy" on the left records (even when the condition is not satisfied), because then an unmatched left record could appear many times (as many times as there are records in the right table). So the LEFT OUTER JOIN is the Cross Join keeping the records satisfying the condition, plus ONE record for each LEFT TABLE row that doesn't satisfy the condition.
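    A quick T-SQL sketch (hypothetical temp tables, chosen so one right key repeats) makes the "exactly one extra row per unmatched left record" point concrete:

        -- Hypothetical keys: #L = {1, 2}, #R = {2, 2, 3}
        CREATE TABLE #L (k integer NOT NULL);
        CREATE TABLE #R (k integer NOT NULL);
        INSERT #L (k) VALUES (1), (2);
        INSERT #R (k) VALUES (2), (2), (3);

        -- Inner join: two rows (left key 2 matches both right 2s)
        SELECT L.k, R.k FROM #L AS L INNER JOIN #R AS R ON R.k = L.k;

        -- Left outer join: those same two rows, plus exactly ONE
        -- NULL-extended row for the unmatched left key 1
        -- (not one per right-table row)
        SELECT L.k, R.k FROM #L AS L LEFT OUTER JOIN #R AS R ON R.k = L.k;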

    Read the article

  • Thinking of an Inner Join as a Cross Join and then satisfying some condition(s)?

    - by Jian Lin
    It seems like the safest way to think of an Inner Join is to think of it as a Cross Join that then satisfies some condition(s). The equi-join case can be obvious, but the non-equi-join case can be a bit confusing. If we always start from the Cross Join, and then keep the rows satisfying the condition, we get the resulting table. In other words, we can always analyze it by taking the first record on the left table and going through every single record on the right, then repeating that for the 2nd record on the left, and for the 3rd, 4th, ... etc. So in our mind we can analyze it this way, and it is like O(n^2), although what actually happens in the DBMS may be a lot faster (when an index is present). Is there another good way to think of it besides this method?
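    The mental model can be written down directly. Assuming placeholder tables dbo.A and dbo.B with a join column k (names invented for illustration), the two queries below are logically equivalent, even though the optimizer will rarely execute an inner join as a literal cross join plus filter:

        -- Inner join, as normally written
        SELECT A.k, B.k
        FROM dbo.A AS A
        JOIN dbo.B AS B
            ON B.k = A.k;

        -- The "cross join, then filter" mental model: same result set
        SELECT A.k, B.k
        FROM dbo.A AS A
        CROSS JOIN dbo.B AS B
        WHERE B.k = A.k;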

    Read the article


  • SQL – Difference Between INNER JOIN and JOIN

    - by Pinal Dave
    Here is the follow-up question to my earlier question SQL – Difference between != and Operator <> used for NOT EQUAL TO Operation. There was a pretty good discussion about this subject earlier and lots of people participated with their opinions. Though the answer was very simple, the conversation was indeed delightful and very informative. In this blog post I have another follow-up question for all of you. What is the difference between INNER JOIN and JOIN? If you work with databases you will find developers using both kinds of joins in their SQL queries. Here is a quick example of the same.

    Query using INNER JOIN:

        SELECT *
        FROM Table1
        INNER JOIN Table2
            ON Table1.Col1 = Table2.Col1

    Query using JOIN:

        SELECT *
        FROM Table1
        JOIN Table2
            ON Table1.Col1 = Table2.Col1

    The question is: what is the difference between the above two syntaxes? Here is the answer – they are equal to each other. There is absolutely no difference between them. They are equal in performance as well as implementation. JOIN is simply a shorter version of INNER JOIN. Personally, I prefer to write INNER JOIN because it is much cleaner to read and it avoids any confusion related to the JOIN. For example, if users had written INNER JOIN instead of JOIN, there would have been no confusion, and hence no need for the original question. Here is the question back to you – which of the following syntaxes do you use when you are inner joining two tables – INNER JOIN or JOIN? And why?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • School vs Self-Taught [duplicate]

    - by Joan Venge
    Do you think formal education is necessary to gain strong programming skills? There are a lot of jobs that aren't programming but involve programming, such as tech artists in games or FX TDs in film, for example. I see similar patterns in the people I work with, where the best ones I have seen were self-taught, because they are primarily artists. But I also see that while their software and programming knowledge is varied and deep, their hardware knowledge is very basic (mine included), again due to the lack of formal education. At the same time, I work with a lot of programmers who possess both sets of skills (software and hardware). Do you think it's necessary to have a formal education to have great programming skills? Would you think less of someone if he didn't have a degree in computer science or software engineering in terms of job opportunities? Would you trust him to do a software engineering job, i.e. write a complex tool? Basically, I feel the self-taught programmer doesn't know a lot of things, i.e. a particular pattern or a particular language, etc. But I find the ability to think outside the box much more powerful. As "pure" programmers, what's your take on it?

    Read the article

  • SQL Server 2005 RIGHT OUTER JOIN not working

    - by CheeseConQueso
    I'm looking up access logs for specific courses. I need to show all the courses even if they don't exist in the logs table; hence the outer join. But after trying (presumably) all of the variations of LEFT OUTER, RIGHT OUTER, INNER and placement of the tables within the SQL code, I couldn't get my result. Here's what I am running:

        SELECT (a.first_name+' '+a.last_name) instructor,
               c.course_id,
               COUNT(l.access_date) course_logins,
               a.logins system_logins,
               MAX(l.access_date) last_course_login,
               a.last_login last_system_login
        FROM lsn_logs l
        RIGHT OUTER JOIN courses c ON l.course_id = c.course_id, accounts a
        WHERE l.object_id = 'LOGIN'
        AND c.course_type = 'COURSE'
        AND c.course_id NOT LIKE '%TEST%'
        AND a.account_rights > 2
        AND l.user_id = a.username
        AND ((a.first_name+' '+a.last_name) = c.instructor)
        GROUP BY c.course_id, a.first_name, a.last_name, a.last_login, a.logins, c.instructor
        ORDER BY a.last_name, a.first_name, c.course_id, course_logins DESC

    Is it something in the WHERE clause that's preventing me from getting course_id's that don't exist in lsn_logs? Is it the way I'm joining the tables? Again, in short, I want all course_id's regardless of their existence in lsn_logs.
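    For what it's worth, the WHERE clause is indeed the usual culprit in this pattern: predicates such as l.object_id = 'LOGIN' and l.user_id = a.username reject the NULL-extended rows the outer join just produced, effectively turning it back into an inner join. A sketch of one fix (same tables and columns as the question, untested against the real schema) makes courses the preserved side, replaces the old-style comma join to accounts with an explicit join, and moves every predicate on the nullable lsn_logs side into the ON clause:

        SELECT (a.first_name+' '+a.last_name) instructor,
               c.course_id,
               -- COUNT of a nullable column gives 0 for courses with no logins
               COUNT(l.access_date) course_logins,
               a.logins system_logins,
               MAX(l.access_date) last_course_login,
               a.last_login last_system_login
        FROM courses c
        JOIN accounts a
            ON (a.first_name+' '+a.last_name) = c.instructor
            AND a.account_rights > 2
        LEFT OUTER JOIN lsn_logs l
            ON l.course_id = c.course_id
            AND l.user_id = a.username
            AND l.object_id = 'LOGIN'   -- filter on the nullable side lives here, not in WHERE
        WHERE c.course_type = 'COURSE'
        AND c.course_id NOT LIKE '%TEST%'
        GROUP BY c.course_id, a.first_name, a.last_name, a.last_login, a.logins
        ORDER BY a.last_name, a.first_name, c.course_id, course_logins DESC;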

    Read the article

  • Refactoring a Single Rails Model with large methods & long join queries trying to do everything

    - by Kelseydh
    I have a working Ruby on Rails Model that I suspect is inefficient, hard to maintain, and full of unnecessary SQL join queries. I want to optimize and refactor this Model (Quiz.rb) to comply with Rails best practices, but I'm not sure how I should do it. The Rails app is a game that has Missions with many Stages. Users complete Stages by answering Questions that have correct or incorrect Answers. When a User tries to complete a stage by answering questions, the User gets a Quiz entry with many Attempts. Each Attempt records an Answer submitted for that Question within the Stage. A user completes a stage or mission by getting every Attempt correct, and their progress is tracked by adding a new entry to the UserMission & UserStage join tables. All of these features work, but unfortunately the Quiz.rb Model has been twisted to handle almost all of it exclusively. The callbacks begin in Quiz.rb, and because I wasn't sure how to leave the Quiz Model during a multi-model update, I resorted to using the Rails console to work out how the @quiz instance variable, via self.some_method calls, could do all the heavy lifting to retrieve every data value for the game's business logic; the result is large extended join queries that "dance" all around the database schema.

    The Quiz.rb Model that smells:

    class Quiz < ActiveRecord::Base
      belongs_to :user
      has_many :attempts, dependent: :destroy

      before_save :check_answer
      before_save :update_user_mission_and_stage

      accepts_nested_attributes_for :attempts, :reject_if => lambda { |a| a[:answer_id].blank? }, :allow_destroy => true

      #Checks every answer within each quiz, adding +1 for each correct answer
      #within a stage quiz, and -1 for each incorrect answer
      def check_answer
        stage_score = 0
        self.attempts.each do |attempt|
          if attempt.answer.correct? == true
            stage_score += 1
          elsif attempt.answer.correct == false
            stage_score -= 1
          end
        end
        stage_score
      end

      def winner
        return true
      end

      def update_user_mission_and_stage
        #######
        #Step 1: Checks if UserMission exists, finds or creates one.
        #if no UserMission for the current mission exists, creates a new UserMission
        if self.user_has_mission? == false
          @user_mission = UserMission.new(user_id: self.user.id,
                                          mission_id: self.current_stage.mission_id,
                                          available: true)
          @user_mission.save
        else
          @user_mission = self.find_user_mission
        end

        #######
        #Step 2: Checks if current UserStage exists, stops if true to prevent duplicate entry
        if self.user_has_stage?
          @user_mission.save
          return true
        else
          #######
          ##Step 3: if step 2 returns false:
          ##Initiates UserStage creation instructions
          #checks for winner (winner actions need to be defined) if they complete
          #last stage of last mission for a given orientation
          if self.passed? && self.is_last_stage? && self.is_last_mission?
            create_user_stage_and_update_user_mission
            self.winner
          #NOTE: The rest are the same, but specify conditions that are available
          #to add badges or other actions upon those conditions occurring:
          ##if user completes first stage of a mission
          elsif self.passed? && self.is_first_stage? && self.is_first_mission?
            create_user_stage_and_update_user_mission
            #creates user badge for finishing first stage of first mission
            self.user.add_badge(5)
            self.user.activity_logs.create(description: "granted first-stage badge", type_event: "badge", value: "first-stage")
          #If user completes last stage of a given mission, creates a new UserMission
          elsif self.passed? && self.is_last_stage? && self.is_first_mission?
            create_user_stage_and_update_user_mission
            #creates user badge for finishing first mission
            self.user.add_badge(6)
            self.user.activity_logs.create(description: "granted first-mission badge", type_event: "badge", value: "first-mission")
          elsif self.passed?
            create_user_stage_and_update_user_mission
          else
            self.passed? == false
            return true
          end
        end
      end

      #Creates a new UserStage record in the database for a successful Quiz question passing
      def create_user_stage_and_update_user_mission
        @nu_stage = @user_mission.user_stages.new(user_id: self.user.id,
                                                  stage_id: self.current_stage.id)
        @nu_stage.save
        @user_mission.save
        self.user.add_points(50)
      end

      #Boolean that defines passing a stage as answering every question in that stage correct
      def passed?
        self.check_answer >= self.number_of_questions
      end

      #Returns the number of questions asked for that stage's quiz
      def number_of_questions
        self.attempts.first.answer.question.stage.questions.count
      end

      #Returns the current_stage for the Quiz, routing through 1st attempt in that Quiz
      def current_stage
        self.attempts.first.answer.question.stage
      end

      #Gives back the position of the stage relative to its mission.
      def stage_position
        self.attempts.first.answer.question.stage.position
      end

      #will find the user_mission for the current user and stage if it exists
      def find_user_mission
        self.user.user_missions.find_by_mission_id(self.current_stage.mission_id)
      end

      #Returns true if quiz was for the last stage within that mission
      #helpful for triggering actions related to a user completing a mission
      def is_last_stage?
        self.stage_position == self.current_stage.mission.stages.last.position
      end

      #Returns true if quiz was for the first stage within that mission
      #helpful for triggering actions related to a user completing a mission
      def is_first_stage?
        self.stage_position == self.current_stage.mission.stages_ordered.first.position
      end

      #Returns true if current user has a UserMission for the current stage
      def user_has_mission?
        self.user.missions.ids.include?(self.current_stage.mission.id)
      end

      #Returns true if current user has a UserStage for the current stage
      def user_has_stage?
        self.user.stages.include?(self.current_stage)
      end

      #Returns true if current user is on the first mission based on position within a given orientation
      def is_first_mission?
        self.user.missions.first.orientation.missions.by_position.first.position == self.current_stage.mission.position
      end

      #Returns true if current user is on the last mission based on position within a given orientation
      def is_last_mission?
        self.user.missions.first.orientation.missions.by_position.last.position == self.current_stage.mission.position
      end
    end

    My Question

    Currently my Rails server takes roughly 500ms to 1 sec to process a single @quiz.save action. I am confident that the slowness here is due to sloppy code, not bad database ERD design. What does a better solution look like? And specifically:

    Should I use join queries to retrieve values like I did here, or is it better to instantiate new objects within the model instead? Or am I missing a better solution?
    How should update_user_mission_and_stage be refactored to follow best practices?

    Relevant Code for Reference:

    quizzes_controller.rb w/ Controller Route Initiating Callback:

    class QuizzesController < ApplicationController
      before_action :find_stage_and_mission
      before_action :find_orientation
      before_action :find_question

      def show
      end

      def create
        @user = current_user
        @quiz = current_user.quizzes.new(quiz_params)
        if @quiz.save
          if @quiz.passed?
            if @mission.next_mission.nil? && @stage.next_stage.nil?
              redirect_to root_path, notice: "Congratulations, you have finished the last mission!"
            elsif @stage.next_stage.nil?
              redirect_to [@mission.next_mission, @mission.first_stage], notice: "Correct! Time for Mission #{@mission.next_mission.position}", info: "Starting next mission"
            else
              redirect_to [@mission, @stage.next_stage], notice: "Answer Correct! You passed the stage!"
            end
          else
            redirect_to [@mission, @stage], alert: "You didn't get every question right, please try again."
          end
        else
          redirect_to [@mission, @stage], alert: "Sorry. We were unable to save your answer. Please contact the administrator."
        end
        @questions = @stage.questions.all
      end

      private

      def find_stage_and_mission
        @stage = Stage.find(params[:stage_id])
        @mission = @stage.mission
      end

      def find_question
        @question = @stage.questions.find_by_id params[:id]
      end

      def quiz_params
        params.require(:quiz).permit(:user_id, :attempt_id, {attempts_attributes: [:id, :quiz_id, :answer_id]})
      end

      def find_orientation
        @orientation = @mission.orientation
        @missions = @orientation.missions.by_position
      end
    end

    Overview of Relevant ERD Database Relationships:

    Mission - Stage - Question - Answer - Attempt <- Quiz <- User
    Mission - UserMission <- User
    Stage - UserStage <- User

    Other Models:

    Mission.rb

    class Mission < ActiveRecord::Base
      belongs_to :orientation
      has_many :stages
      has_many :user_missions, dependent: :destroy
      has_many :users, through: :user_missions

      #SCOPES
      scope :by_position, -> { order(position: :asc) }

      def stages_ordered
        stages.order(:position)
      end

      def next_mission
        self.orientation.missions.find_by_position(self.position.next)
      end

      def first_stage
        next_mission.stages_ordered.first
      end
    end

    Stage.rb:

    class Stage < ActiveRecord::Base
      belongs_to :mission
      has_many :questions, dependent: :destroy
      has_many :user_stages, dependent: :destroy
      has_many :users, through: :user_stages

      accepts_nested_attributes_for :questions, reject_if: :all_blank, allow_destroy: true

      def next_stage
        self.mission.stages.find_by_position(self.position.next)
      end
    end

    Question.rb

    class Question < ActiveRecord::Base
      belongs_to :stage
      has_many :answers, dependent: :destroy

      accepts_nested_attributes_for :answers, :reject_if => lambda { |a| a[:body].blank? }, :allow_destroy => true
    end

    Answer.rb:

    class Answer < ActiveRecord::Base
      belongs_to :question
      has_many :attempts, dependent: :destroy
    end

    Attempt.rb:

    class Attempt < ActiveRecord::Base
      belongs_to :answer
      belongs_to :quiz
    end

    User.rb:

    class User < ActiveRecord::Base
      belongs_to :school
      has_many :activity_logs
      has_many :user_missions, dependent: :destroy
      has_many :missions, through: :user_missions
      has_many :user_stages, dependent: :destroy
      has_many :stages, through: :user_stages
      has_many :orientations, through: :school
      has_many :quizzes, dependent: :destroy
      has_many :attempts, through: :quizzes

      def latest_stage_position
        self.user_missions.last.user_stages.last.stage.position
      end
    end

    UserMission.rb

    class UserMission < ActiveRecord::Base
      belongs_to :user
      belongs_to :mission
      has_many :user_stages, dependent: :destroy
    end

    UserStage.rb

    class UserStage < ActiveRecord::Base
      belongs_to :user
      belongs_to :stage
      belongs_to :user_mission
    end

    Read the article

  • How to join data frames in R (inner, outer, left, right)?

    - by Dan Goldstein
    Given two data frames:

    df1 = data.frame(CustomerId=c(1:6), Product=c(rep("Toaster",3), rep("Radio",3)))
    df2 = data.frame(CustomerId=c(2,4,6), State=c(rep("Alabama",2), rep("Ohio",1)))

    > df1
      CustomerId Product
    1          1 Toaster
    2          2 Toaster
    3          3 Toaster
    4          4   Radio
    5          5   Radio
    6          6   Radio

    > df2
      CustomerId   State
    1          2 Alabama
    2          4 Alabama
    3          6    Ohio

    How can I do database style, i.e., SQL style, joins? That is, how do I get:

    An inner join of df1 and df2
    An outer join of df1 and df2
    A left outer join of df1 and df2
    A right outer join of df1 and df2

    P.S. IKT-JARQ (I Know This - Just Adding R Questions) Extra credit: How can I do a SQL style select statement?
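
    For reference, a minimal SQL sketch of the four operations the question names, treating df1 and df2 as tables (this shows only the target semantics; the R side is what the question asks for):

    SELECT * FROM df1 INNER JOIN df2       ON df1.CustomerId = df2.CustomerId; -- inner
    SELECT * FROM df1 FULL OUTER JOIN df2  ON df1.CustomerId = df2.CustomerId; -- outer
    SELECT * FROM df1 LEFT OUTER JOIN df2  ON df1.CustomerId = df2.CustomerId; -- left
    SELECT * FROM df1 RIGHT OUTER JOIN df2 ON df1.CustomerId = df2.CustomerId; -- right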

    Read the article

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance tuning SSIS data flows is this: why isn't the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, so why wouldn't the output be sorted as well? Well, here's a little secret: the Merge Join output IS sorted! There's a caveat though: it is only under certain circumstances, and SSIS itself doesn't do a good job of informing you of it. Let's take a look at an example.

    Here we have a dataflow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] & [AdventureWorks2008].[Sales].[SalesOrderDetail] tables, then joins them using a Merge Join component. Inside the editor of the Merge Join, we are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon), and we are also putting [SalesOrderHeader].[SalesOrderId] into the output. Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence it is hidden away from us. There are a couple of ways to prove that this is the case; I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that: I can attach a Sort component to the output. Notice that the Sort component is attempting to sort on the [SalesOrderId] column. This gives us the following warning:

    Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed.

    The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output. So there you go, the Merge Join component can indeed produce a sorted output, and that's very useful in order to avoid unnecessary expensive Sort operations downstream. Hope this is useful to someone out there! @Jamiet  P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!
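
    As a side note, the sorted inputs in a flow like this usually come from source queries that push the ordering down to the database engine. A minimal sketch of such source queries, assuming OLE DB sources against AdventureWorks2008 (and remembering that IsSorted and the SortKeyPosition of the key column must still be set by hand on each source's output in its Advanced Editor, since SSIS cannot infer the ORDER BY):

    SELECT SalesOrderID, OrderDate, CustomerID
    FROM Sales.SalesOrderHeader
    ORDER BY SalesOrderID;

    SELECT SalesOrderID, SalesOrderDetailID, OrderQty
    FROM Sales.SalesOrderDetail
    ORDER BY SalesOrderID;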

    Read the article

  • Basics of Join Predicate Pushdown in Oracle

    - by Maria Colgan
    Happy New Year to all of our readers! We hope you all had a great holiday season. We start the new year by continuing our series on Optimizer transformations. This time it is the turn of Predicate Pushdown. I would like to thank Rafi Ahmed for the content of this blog.

    Normally, a view cannot be joined with an index-based nested loop (i.e., index access) join, since a view, in contrast with a base table, does not have an index defined on it. A view can only be joined with other tables using three methods: hash, nested loop, and sort-merge joins.

    Introduction

    The join predicate pushdown (JPPD) transformation allows a view to be joined with the index-based nested-loop join method, which may provide a more optimal alternative. In the join predicate pushdown transformation, the view remains a separate query block, but it contains the join predicate, which is pushed down from its containing query block into the view. The view thus becomes correlated and must be evaluated for each row of the outer query block. These pushed-down join predicates, once inside the view, open up new index access paths on the base tables inside the view; this allows the view to be joined with the index-based nested-loop join method, thereby enabling the optimizer to select an efficient execution plan.

    The join predicate pushdown transformation is not always optimal. The join predicate pushed-down view becomes correlated and must be evaluated for each outer row; if there is a large number of outer rows, the cost of evaluating the view multiple times may make the nested-loop join suboptimal, and therefore joining the view with the hash or sort-merge join method may be more efficient. The decision whether to push down join predicates into a view is determined by evaluating the costs of the outer query with and without the join predicate pushdown transformation under Oracle's cost-based query transformation framework.

    The join predicate pushdown transformation applies to both non-mergeable views and mergeable views, and to pre-defined and inline views, as well as to views generated internally by the optimizer during various transformations. The following shows the types of views on which join predicate pushdown is currently supported:

    UNION ALL/UNION view
    Outer-joined view
    Anti-joined view
    Semi-joined view
    DISTINCT view
    GROUP-BY view

    Examples

    Consider query A, which has an outer-joined view V. The view cannot be merged, as it contains two tables, and the join between these two tables must be performed before the join between the view and the outer table T4.

    A: SELECT T4.unique1, V.unique3
       FROM T_4K T4,
            (SELECT T10.unique3, T10.hundred, T10.ten
             FROM T_5K T5, T_10K T10
             WHERE T5.unique3 = T10.unique3) V
       WHERE T4.unique3 = V.hundred(+) AND
             T4.ten = V.ten(+) AND
             T4.thousand = 5;

    The following shows the non-default plan for query A generated by disabling join predicate pushdown. When query A undergoes join predicate pushdown, it yields query B. Note that query B is expressed in non-standard SQL and shows an internal representation of the query.

    B: SELECT T4.unique1, V.unique3
       FROM T_4K T4,
            (SELECT T10.unique3, T10.hundred, T10.ten
             FROM T_5K T5, T_10K T10
             WHERE T5.unique3 = T10.unique3
             AND T4.unique3 = T10.hundred(+)
             AND T4.ten = T10.ten(+)) V
       WHERE T4.thousand = 5;

    The execution plan for query B is shown below. In the execution plan BX, note the keyword 'VIEW PUSHED PREDICATE', which indicates that the view has undergone the join predicate pushdown transformation. The join predicates (shown here in red) have been moved into the view V; these join predicates open up index access paths, thereby enabling an index-based nested-loop join of the view. With join predicate pushdown, the cost of query A has come down from 62 to 32. As mentioned earlier, the join predicate pushdown transformation is cost-based, and a join predicate pushed-down plan is selected only when it reduces the overall cost.

    Consider another example of a query C, which contains a view with the UNION ALL set operator.

    C: SELECT R.unique1, V.unique3
       FROM T_5K R,
            (SELECT T1.unique3, T2.unique1+T1.unique1
             FROM T_5K T1, T_10K T2
             WHERE T1.unique1 = T2.unique1
             UNION ALL
             SELECT T1.unique3, T2.unique2
             FROM G_4K T1, T_10K T2
             WHERE T1.unique1 = T2.unique1) V
       WHERE R.unique3 = V.unique3 and R.thousand < 1;

    The execution plan of query C is shown below. In the plan, 'VIEW UNION ALL PUSHED PREDICATE' indicates that the UNION ALL view has undergone the join predicate pushdown transformation. As can be seen, here the join predicate has been replicated and pushed inside every branch of the UNION ALL view. The join predicates (shown here in red) open up index access paths, thereby enabling an index-based nested-loop join of the view.

    Consider query D as an example of join predicate pushdown into a DISTINCT view. We have the following cardinalities of the tables involved in query D: Sales (1,016,271), Customers (50,000), and Costs (787,766).

    D: SELECT C.cust_last_name, C.cust_city
       FROM customers C,
            (SELECT DISTINCT S.cust_id
             FROM sales S, costs CT
             WHERE S.prod_id = CT.prod_id and CT.unit_price > 70) V
       WHERE C.cust_state_province = 'CA' and C.cust_id = V.cust_id;

    The execution plan of query D is shown below. As shown in plan XD, when query D undergoes the join predicate pushdown transformation, the expensive DISTINCT operator is removed and the join is converted into a semi-join; this is possible, since all the SELECT list items of the view participate in an equi-join with the outer tables. Under similar conditions, when a group-by view undergoes the join predicate pushdown transformation, the expensive group-by operator can also be removed. With the join predicate pushdown transformation, the elapsed time of query D came down from 63 seconds to 5 seconds. Since DISTINCT and group-by views are mergeable views, the cost-based transformation framework also compares the cost of merging the view with that of join predicate pushdown in selecting the most optimal execution plan.

    Summary

    We have tried to illustrate the basic ideas behind join predicate pushdown on different types of views by showing example queries that are quite simple. Oracle can handle far more complex queries and other types of views not shown here in the examples. Again, many thanks to Rafi Ahmed for the content of this blog post.
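
    When experimenting, the transformation can also be steered with the PUSH_PRED and NO_PUSH_PRED hints, which makes it easy to compare the two plans for a query like A. A minimal sketch using the view alias V from query A (the hint usage is an illustration, not part of the original examples):

    SELECT /*+ PUSH_PRED(V) */ T4.unique1, V.unique3
    FROM T_4K T4,
         (SELECT T10.unique3, T10.hundred, T10.ten
          FROM T_5K T5, T_10K T10
          WHERE T5.unique3 = T10.unique3) V
    WHERE T4.unique3 = V.hundred(+) AND
          T4.ten = V.ten(+) AND
          T4.thousand = 5;
    -- Substituting NO_PUSH_PRED(V) suppresses the transformation for comparison.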

    Read the article

  • Detecting walls or floors in pygame

    - by Serial
    I am trying to make bullets bounce off walls, but I can't figure out how to correctly do the collision detection. What I am currently doing is iterating through all the solid blocks, and if the bullet hits the bottom, top or sides, its vector is adjusted accordingly. However, sometimes when I shoot, the bullet doesn't bounce; I think it's when I shoot at a border between two blocks. Here is the update method for my Bullet class:

    def update(self, dt):
        if self.can_bounce:
            #if the bullet hasnt bounced, find its vector using the mouse-click pos and player pos
            speed = -10.
            range = 200
            distance = [self.mouse_x - self.player[0], self.mouse_y - self.player[1]]
            norm = math.sqrt(distance[0] ** 2 + distance[1] ** 2)
            direction = [distance[0] / norm, distance[1] / norm]
            bullet_vector = [direction[0] * speed, direction[1] * speed]
            self.dx = bullet_vector[0]
            self.dy = bullet_vector[1]

        #check each block for collision
        for block in self.game.solid_blocks:
            last = self.rect.copy()
            if self.rect.colliderect(block):
                #each test checks whether the bullet hit the top, bottom, left or right side of the block it is colliding with
                topcheck = self.rect.top < block.rect.bottom and self.rect.top > block.rect.top
                bottomcheck = self.rect.bottom > block.rect.top and self.rect.bottom < block.rect.bottom
                rightcheck = self.rect.right > block.rect.left and self.rect.right < block.rect.right
                leftcheck = self.rect.left < block.rect.right and self.rect.left > block.rect.left
                if self.can_bounce:
                    if topcheck:
                        self.rect = last
                        self.dy *= -1
                        self.can_bounce = False
                        print "top"
                    if bottomcheck:
                        self.rect = last
                        self.dy *= -1 #Bottom check
                        self.can_bounce = False
                        print "bottom"
                    if rightcheck:
                        self.rect = last
                        self.dx *= -1 #right check
                        self.can_bounce = False
                        print "right"
                    if leftcheck:
                        self.rect = last
                        self.dx *= -1 #left check
                        self.can_bounce = False
                        print "left"
                else:
                    # if it has already bounced and is colliding again, kill it
                    self.kill()

        for enemy in self.game.enemies_list:
            if self.rect.colliderect(enemy):
                self.kill()

        #update position
        self.rect.x -= self.dx
        self.rect.y -= self.dy

    This definitely isn't the best way to do it but I can't think of another way. If anyone has done this or can help that would be awesome!

    Read the article

  • SQL SERVER - Differences Between Left Join and Left Outer Join

    - by pinaldave
    There are a few questions that I had decided not to discuss on this blog because I think they are very simple and many of us know it. Many times, I even receive not-so positive notes from several readers when I am writing something simple. However, assuming that we know all and beginners should know everything is not the right attitude. Since day 1, I have been keeping a small journal regarding questions that I receive in this blog. There are around 200+ questions I receive every day through emails, comments and occasional phone calls. Yesterday, I received a comment with the following question: What are the differences between Left Join and Left Outer Join? Click here to read original comment. This question has triggered the threshold of receiving the same question repeatedly. Here is the answer: There is absolutely no difference between LEFT JOIN and LEFT OUTER JOIN. The same is true for RIGHT JOIN and RIGHT OUTER JOIN. When you use LEFT JOIN keyword in SQL Server, it means LEFT OUTER JOIN only. I have already written in-depth visual diagram discussing the JOINs. I encourage all of you to read the article for further understanding of the JOINs: Read Introduction to JOINs – Basic of JOINs Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
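
    A minimal sketch that demonstrates the equivalence (t1 and t2 here are hypothetical tables; the point can be confirmed by comparing the two execution plans):

    SELECT t1.id, t2.val FROM t1 LEFT JOIN t2 ON t1.id = t2.id;
    SELECT t1.id, t2.val FROM t1 LEFT OUTER JOIN t2 ON t1.id = t2.id;
    -- Both statements compile to the same plan and return the same rows;
    -- OUTER is simply an optional keyword in the ANSI join syntax.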

    Read the article

  • Joining on a common table, how do you get a FULL OUTER JOIN to expand on another table?

    - by stimpy77
    I've scoured StackOverflow and Google for an answer to this problem. I'm trying to create a Microsoft SQL Server 2008 view. Not a stored procedure. Not a function. Just a query (i.e., a view). I have three tables. The first table defines a common key, let's say "CompanyID". The other two tables have a sometimes-common field, let's say "EmployeeName". I want a single table result that, when my WHERE clause says "WHERE CompanyID = 12", looks like this:

    CompanyID | TableA    | TableB
    12        | John Doe  | John Doe
    12        | Betty Sue | NULL
    12        | NULL      | Billy Bob

    I've tried a FULL OUTER JOIN that looks like this:

    SELECT Company.CompanyID, TableA.EmployeeName, TableB.EmployeeName
    FROM Company
    FULL OUTER JOIN TableA ON Company.CompanyID = TableA.CompanyID
    FULL OUTER JOIN TableB ON Company.CompanyID = TableB.CompanyID
        AND (TableA.EmployeeName IS NULL
             OR TableB.EmployeeName IS NULL
             OR TableB.EmployeeName = TableA.EmployeeName)

    I'm only getting the NULL from one matched table; I'm not getting the expansion for the other table. In the above sample, I'm basically only getting the first and third rows and not the second. Can someone help me create this query and show me how this is done correctly? BTW I already have a stored procedure that looks very clean and populates an in-memory table, but that isn't what I want. Thanks.
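
    For what it's worth, a sketch of one approach that produces exactly those three rows: full outer join the two employee tables to each other on both CompanyID and EmployeeName, then recover the key with COALESCE. This is an assumption about the intended logic, not a confirmed answer to the question:

    SELECT COALESCE(a.CompanyID, b.CompanyID) AS CompanyID,
           a.EmployeeName AS TableA,
           b.EmployeeName AS TableB
    FROM TableA AS a
    FULL OUTER JOIN TableB AS b
        ON a.CompanyID = b.CompanyID
       AND a.EmployeeName = b.EmployeeName
    WHERE COALESCE(a.CompanyID, b.CompanyID) = 12;

    In a view, the WHERE clause would be left off and applied by the caller.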

    Read the article

  • Most optimal order (of joins) for left join

    - by Ram
    I have 3 tables: Table1 (with 1020690 records), Table2 (with 289425 records), and Table3 (with 83692 records). I have something like this:

    SELECT * /* OK, fine, SELECT * is bad when not all columns are needed; this is just an example */
    FROM Table1 T1
    LEFT JOIN Table2 T2 ON T1.id = T2.id
    LEFT JOIN Table3 T3 ON T1.id = T3.id

    and a query like this:

    SELECT *
    FROM Table1 T1
    LEFT JOIN Table3 T3 ON T1.id = T3.id
    LEFT JOIN Table2 T2 ON T1.id = T2.id

    The query plan shows me that it uses 2 Merge Joins for both the joins. For the first query, the first merge is with T1 and T2 and then with T3. For the second query, the first merge is with T1 and T3 and then with T2. Both these queries take about the same time (approx. 40 seconds), or sometimes Query1 takes a couple of seconds longer. So my question is: does the join order matter?
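
    One way to answer this empirically is to pin the join order as written and compare plans and timings. A minimal sketch using the FORCE ORDER query hint (diagnostic use only, not a recommendation for production):

    SELECT *
    FROM Table1 T1
    LEFT JOIN Table2 T2 ON T1.id = T2.id
    LEFT JOIN Table3 T3 ON T1.id = T3.id
    OPTION (FORCE ORDER); -- joins are evaluated in the order written

    If both forced orders still take about 40 seconds, the optimizer's choice of order is not the bottleneck.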

    Read the article

  • LINQ to SQL - Left Outer Join with multiple join conditions

    - by dan
    I have the following SQL which I am trying to translate to LINQ:

    SELECT f.value
    FROM period as p
    LEFT OUTER JOIN facts AS f ON p.id = f.periodid AND f.otherid = 17
    WHERE p.companyid = 100

    I have seen the typical implementation of the left outer join (i.e., into x from y in x.DefaultIfEmpty() etc.) but am unsure how to introduce the other join condition (AND f.otherid = 17).

    EDIT: Why is the AND f.otherid = 17 condition part of the JOIN instead of in the WHERE clause? Because f may not exist for some rows and I still want these rows to be included. If the condition is applied in the WHERE clause, after the JOIN, then I don't get the behaviour I want.

    Unfortunately this:

    from p in context.Periods
    join f in context.Facts on p.id equals f.periodid into fg
    from fgi in fg.DefaultIfEmpty()
    where p.companyid == 100 && fgi.otherid == 17
    select f.value

    seems to be equivalent to this:

    SELECT f.value
    FROM period as p
    LEFT OUTER JOIN facts AS f ON p.id = f.periodid
    WHERE p.companyid = 100 AND f.otherid = 17

    which is not quite what I'm after.

    Read the article

  • Creating New Scripts Dynamically in Lua

    - by bazola
    Right now this is just a crazy idea that I had, but I was able to implement the code and get it working properly. I am not entirely sure of what the use cases would be just yet. What this code does is create a new Lua script file in the project directory. The ScriptWriter takes as arguments the file name, a table containing any arguments that the script should take when created, and a table containing any instance variables to create by default. My plan is to extend this code to create new functions based on inputs sent in during its creation as well. What makes this cool is that the new file is both generated and loaded dynamically on the fly. Theoretically you could get this code to generate and load any script imaginable. One use case I can think of is an AI that creates scripts to map out its functions, and creates new scripts for new situations or environments. At this point, this is all theoretical, though. Here is the test code that is creating the new script and then immediately loading it and calling functions from it:

    function Card:doScriptWriterThing()
      local scriptName = "ScriptIAmMaking"
      local scripter = scriptWriter:new(scriptName, {"argumentName"}, {name = "'test'", one = 1})
      scripter:makeFileForLoadedSettings()
      local loadedScript = require (scriptName)
      local scriptInstance = loadedScript:new("sayThis")
      print(scriptInstance:get_name()) --will print test
      print(scriptInstance:get_one()) -- will print 1
      scriptInstance:set_one(10000)
      print(scriptInstance:get_one()) -- will print 10000
      print(scriptInstance:get_argumentName()) -- will print sayThis
      scriptInstance:set_argumentName("saySomethingElse")
      print(scriptInstance:get_argumentName()) --will print saySomethingElse
    end

    Here is ScriptWriter.lua

    local ScriptWriter = {}

    local twoSpaceIndent = "  "
    local equalsWithSpaces = " = "
    local newLine = "\n"

    --scriptNameToCreate must be a string
    --argumentsForNew and instanceVariablesToCreate must be tables and not nil
    function ScriptWriter:new(scriptNameToCreate, argumentsForNew, instanceVariablesToCreate)
      local instance = setmetatable({}, { __index = self })
      instance.name = scriptNameToCreate
      instance.newArguments = argumentsForNew
      instance.instanceVariables = instanceVariablesToCreate
      instance.stringList = {}
      return instance
    end

    function ScriptWriter:makeFileForLoadedSettings()
      self:buildInstanceMetatable()
      self:buildInstanceCreationMethod()
      self:buildSettersAndGetters()
      self:buildReturn()
      self:writeStringsToFile()
    end

    --very first line of any script that will have instances
    function ScriptWriter:buildInstanceMetatable()
      table.insert(self.stringList, "local " .. self.name .. " = {}" .. newLine)
      table.insert(self.stringList, newLine)
    end

    --every script made this way needs a new method to create its instances
    function ScriptWriter:buildInstanceCreationMethod()
      --new() function declaration
      table.insert(self.stringList, ("function " .. self.name .. ":new("))
      self:buildNewArguments()
      table.insert(self.stringList, ")" .. newLine)
      --first line inside :new() function
      table.insert(self.stringList, twoSpaceIndent .. "local instance = setmetatable({}, { __index = self })" .. newLine)
      --add designated arguments inside :new()
      self:buildNewArgumentVariables()
      --create the instance variables with the loaded values
      for key,value in pairs(self.instanceVariables) do
        table.insert(self.stringList, twoSpaceIndent .. "instance." .. key .. equalsWithSpaces .. value .. newLine)
      end
      --close the :new() function
      table.insert(self.stringList, twoSpaceIndent .. "return instance" .. newLine)
      table.insert(self.stringList, "end" .. newLine)
      table.insert(self.stringList, newLine)
    end

    function ScriptWriter:buildNewArguments()
      --if there are arguments for :new(), add them
      for key,value in ipairs(self.newArguments) do
        table.insert(self.stringList, value)
        table.insert(self.stringList, ", ")
      end
      if next(self.newArguments) ~= nil then --makes sure the table is not empty first
        table.remove(self.stringList) --remove the very last element, which will be the extra ", "
      end
    end

    function ScriptWriter:buildNewArgumentVariables()
      --add the designated arguments to :new()
      for key, value in ipairs(self.newArguments) do
        table.insert(self.stringList, twoSpaceIndent .. "instance." .. value .. equalsWithSpaces .. value .. newLine)
      end
    end

    --the instance variables need separate code because their names have to be the key and not the argument name
    function ScriptWriter:buildSettersAndGetters()
      for key,value in ipairs(self.newArguments) do
        self:buildArgumentSetter(value)
        self:buildArgumentGetter(value)
        table.insert(self.stringList, newLine)
      end
      for key,value in pairs(self.instanceVariables) do
        self:buildInstanceVariableSetter(key, value)
        self:buildInstanceVariableGetter(key, value)
        table.insert(self.stringList, newLine)
      end
    end

    --code for arguments passed in
    function ScriptWriter:buildArgumentSetter(variable)
      table.insert(self.stringList, "function " .. self.name .. ":set_" .. variable .. "(newValue)" .. newLine)
      table.insert(self.stringList, twoSpaceIndent .. "self." .. variable .. equalsWithSpaces .. "newValue" .. newLine)
      table.insert(self.stringList, "end" .. newLine)
    end

    function ScriptWriter:buildArgumentGetter(variable)
      table.insert(self.stringList, "function " .. self.name .. ":get_" .. variable .. "()" .. newLine)
      table.insert(self.stringList, twoSpaceIndent .. "return " .. "self." .. variable .. newLine)
      table.insert(self.stringList, "end" .. newLine)
    end

    --code for instance variable values passed in
    function ScriptWriter:buildInstanceVariableSetter(key, variable)
      table.insert(self.stringList, "function " .. self.name .. ":set_" .. key .. "(newValue)" .. newLine)
      table.insert(self.stringList, twoSpaceIndent .. "self." .. key .. equalsWithSpaces .. "newValue" .. newLine)
      table.insert(self.stringList, "end" .. newLine)
    end

    function ScriptWriter:buildInstanceVariableGetter(key, variable)
      table.insert(self.stringList, "function " .. self.name .. ":get_" .. key .. "()" .. newLine)
      table.insert(self.stringList, twoSpaceIndent .. "return " .. "self." .. key .. newLine)
      table.insert(self.stringList, "end" .. newLine)
    end

    --last line of any script that will have instances
    function ScriptWriter:buildReturn()
      table.insert(self.stringList, "return " .. self.name)
    end

    function ScriptWriter:writeStringsToFile()
      local fileName = (self.name .. ".lua")
      file = io.open(fileName, 'w')
      for key,value in ipairs(self.stringList) do
        file:write(value)
      end
      file:close()
    end

    return ScriptWriter

    And here is what the code provided will generate:

    local ScriptIAmMaking = {}

    function ScriptIAmMaking:new(argumentName)
      local instance = setmetatable({}, { __index = self })
      instance.argumentName = argumentName
      instance.name = 'test'
      instance.one = 1
      return instance
    end

    function ScriptIAmMaking:set_argumentName(newValue)
      self.argumentName = newValue
    end
    function ScriptIAmMaking:get_argumentName()
      return self.argumentName
    end

    function ScriptIAmMaking:set_name(newValue)
      self.name = newValue
    end
    function ScriptIAmMaking:get_name()
      return self.name
    end

    function ScriptIAmMaking:set_one(newValue)
      self.one = newValue
    end
    function ScriptIAmMaking:get_one()
      return self.one
    end

    return ScriptIAmMaking

    All of this is generated with these calls:

    local scripter = scriptWriter:new(scriptName, {"argumentName"}, {name = "'test'", one = 1})
    scripter:makeFileForLoadedSettings()

    I am not sure if I am correct that this could be useful in certain situations. What I am looking for is feedback on the readability of the code, and following Lua best practices. I would also love to hear whether this approach is a valid one, and whether the way that I have done things will be extensible.

    Read the article

  • Would a professional, self taught programmer benefit from reading an algorithms book?

    - by user65483
    I'm a 100% self-taught, professional programmer (I've worked at a few web startups and made a few independent games). I've read quite a few of the "essential" books (Clean Code, The Pragmatic Programmer, Code Complete, SICP, K&R). I'm considering reading Introduction to Algorithms. I've asked a few colleagues if reading it will improve my programming skills, and I got very mixed answers. A few said yes, a few said no, and one said "only if you spend a lot of time implementing these algorithms" (I don't). So, I figured I'd ask Stack Exchange. Is it worth the time to read about algorithms if you're a professional programmer who seldom needs to use complex algorithms? For what it's worth, I have a strong mathematical background (a two-year degree in Mathematics; took Linear Algebra, Differential Equations, and Calc I-III).

    Read the article

1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >