Search Results

Search found 33585 results on 1344 pages for 'sql execution plan'.

  • Bulk update + SQL + self join

    - by Nev_Rahd
    Hello all, I would like to update a column in a table with reference to other column(s) in the same table. For example (as in the figure below): I would like to update EffectiveEndDate with the minimum date that is greater than the EffectiveStartDate of the current record. How can this be achieved in T-SQL? Can this be done with a single UPDATE statement? Thanks.
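
    A minimal sketch of one possible answer (not from the original post), using hypothetical names dbo.Rates, EffectiveStartDate and EffectiveEndDate in place of those in the figure: a correlated subquery lets a single UPDATE do the job.

        -- For each row, set the end date to the smallest start date later than its own.
        -- Table and column names are stand-ins for those in the original figure.
        UPDATE r
        SET EffectiveEndDate =
            (SELECT MIN(r2.EffectiveStartDate)
             FROM dbo.Rates AS r2
             WHERE r2.EffectiveStartDate > r.EffectiveStartDate)
        FROM dbo.Rates AS r;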

  • How do you unit test your T-SQL

    - by AlexKuznetsov
    How do you unit test your T-SQL? Which libraries/tools do you use? What percentage of your code is covered by unit tests and how do you measure it? Do you think the time and effort which you invested in your unit testing harness has paid off or not? If you do not use unit testing, can you explain why not?

  • Ordering sql query results

    - by user343409
    My SQL query returns two columns: product_id (an integer) and pnl (a float, which can be negative). I get more than 100 rows. I want to filter down to the top 40 rows based on ABS(pnl), but the results should be ordered by the pnl column only, not by ABS(pnl). I want to do this on SQL Server 2005. Is there a way to do this?
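
    A minimal sketch of one way to do it (not from the original post): pick the top 40 by ABS(pnl) in a derived table, then sort the outer query by pnl. dbo.MyResults stands in for the asker's actual query.

        SELECT T.product_id, T.pnl
        FROM
        (
            SELECT TOP (40) product_id, pnl
            FROM dbo.MyResults         -- stand-in for the original query
            ORDER BY ABS(pnl) DESC     -- largest absolute pnl first
        ) AS T
        ORDER BY T.pnl;                -- final order by signed pnl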

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing.

    Test Data

    The two tables in this example share a common partition scheme. The partition function uses 41 equal-size partitions:

        CREATE PARTITION FUNCTION PFT (integer)
        AS RANGE RIGHT
        FOR VALUES
        (
            125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000,
            1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000,
            2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000,
            3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000,
            4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000
        );
        GO
        CREATE PARTITION SCHEME PST
        AS PARTITION PFT
        ALL TO ([PRIMARY]);

    The two tables are:

        CREATE TABLE dbo.T1
        (
            TID integer NOT NULL IDENTITY(0,1),
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T1
                PRIMARY KEY CLUSTERED (TID)
                ON PST (TID)
        );

        CREATE TABLE dbo.T2
        (
            TID integer NOT NULL,
            Column1 integer NOT NULL,
            Padding binary(100) NOT NULL DEFAULT 0x,
            CONSTRAINT PK_T2
                PRIMARY KEY CLUSTERED (TID, Column1)
                ON PST (TID)
        );

    The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID:

        INSERT dbo.T1 WITH (TABLOCKX)
            (Column1)
        SELECT
            (ABS(CHECKSUM(NEWID())) % 5) + 1
        FROM dbo.Numbers AS N
        WHERE n BETWEEN 1 AND 5000000;

    In case you don't already have an auxiliary table of numbers lying around, here's a script to create one with 10 million rows:

        CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);

        WITH
            L0 AS (SELECT 1 AS c UNION ALL SELECT 1),
            L1 AS (SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
            L2 AS (SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
            L3 AS (SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
            L4 AS (SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
            L5 AS (SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B),
            Nums AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5)
        INSERT dbo.Numbers WITH (TABLOCKX)
        SELECT TOP (10000000) n
        FROM Nums
        ORDER BY n
        OPTION (MAXDOP 1);

    Next we load data into table T2. The relationship between the two tables is that table 2 contains 'n' rows for each row in table 1, where 'n' is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way.

        INSERT dbo.T2 WITH (TABLOCKX)
            (TID, Column1)
        SELECT
            T.TID,
            N.n
        FROM dbo.T1 AS T
        JOIN dbo.Numbers AS N
            ON N.n >= 1
            AND N.n <= T.Column1;

    Table T2 ends up containing about 15 million rows. The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone.

    Partition Distribution

    The following query shows the number of rows in each partition of table T1:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T1 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2:

        SELECT
            PartitionID = CA1.P,
            NumRows = COUNT_BIG(*)
        FROM dbo.T2 AS T
        CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P)
        GROUP BY CA1.P
        ORDER BY CA1.P;

    There are roughly 375,000 rows in each partition (the rightmost partition is also empty). OK, that's the test data done.
    Test Query and Execution Plan

    The task is to count the rows resulting from joining tables 1 and 2 on the TID column:

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    The optimizer chooses a plan using parallel hash join, and partial aggregation. The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads. With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched. Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms.

    Execution Plan Analysis

    The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads.

    For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value '1234' is placed in thread 5's hash table, the execution plan must guarantee that any rows from T2 that also have join key value '1234' probe thread 5's hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. Since both exchanges use the same hash function, rows from T1 and T2 with the same join key must end up on the same hash join thread.

    Expensive Exchanges

    This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        WHERE $PARTITION.PFT(T1.TID) = 1
        AND $PARTITION.PFT(T2.TID) = 1
        OPTION (MAXDOP 1);

    The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query?
    Forcing a Merge Join

    Let's force the optimizer to use a merge join on the test query using a hint:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN);

    The resulting plan performs the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads.

    Parallel Merge Join

    We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649:

        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (MERGE JOIN, QUERYTRACEON 8649);

    The resulting execution plan looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good.

    A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro). CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving 'merging' exchanges (because merge join requires ordered inputs). Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail.

    The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning.

    Collocated Joins

    In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next.

    Costing and Plan Selection

    The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query:

        -- Pretend IOs are 50x cost temporarily
        DBCC SETIOWEIGHT(50);

        -- Co-located hash join
        SELECT COUNT_BIG(*)
        FROM dbo.T1 AS T1
        JOIN dbo.T2 AS T2
            ON T2.TID = T1.TID
        OPTION (RECOMPILE);

        -- Reset IO costing
        DBCC SETIOWEIGHT(1);

    Collocated Join Plan

    In the estimated execution plan for the collocated join, the Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange... and so on until all partitions have been processed.

    It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition. The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition 'n' is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges.

    CPU and Memory Efficiency Improvements

    The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time.

    Collocated Hash Join Performance

    The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 show the estimation was for 121,951 rows.
    This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb. A level 1 spill doesn't sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions).

    Collocated Merge Join

    We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb), but we do know:

    - Merge join does not require a memory grant; and
    - Merge join was the optimizer's preferred join option for a single partition join.

    Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us?

    CROSS APPLY sys.partitions

    We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            -- Partition numbers
            SELECT p.partition_number
            FROM sys.partitions AS p
            WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
            AND p.index_id = 1
        ) AS P
        CROSS APPLY
        (
            -- Count per collocated join
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

    In the estimated plan, the cardinality estimates aren't all that good, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts.

    Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory.

    Using a Temporary Table

    Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table.
    We can work around that by writing the partition numbers to a temporary table (or table variable):

        SET STATISTICS IO ON;
        DECLARE @s datetime2 = SYSUTCDATETIME();

        CREATE TABLE #P
        (
            partition_number integer PRIMARY KEY
        );

        INSERT #P (partition_number)
        SELECT p.partition_number
        FROM sys.partitions AS p
        WHERE p.[object_id] = OBJECT_ID(N'T1', N'U')
        AND p.index_id = 1;

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals;

        DROP TABLE #P;

        SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME());
        SET STATISTICS IO OFF;

    Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn't choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to. In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time). Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer's cost model not reducing operator CPU costs on the inner side of a nested loops join. Don't get me started on that, we'll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan.

    Parallel Collocated Merge Join

    We can produce the desired parallel plan using trace flag 8649 again:

        SELECT row_count = SUM(Subtotals.cnt)
        FROM #P AS p
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post.

    Performance

    The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:

    - Collocated parallel merge join: 1350ms
    - Parallel hash join: 2600ms
    - Collocated serial merge join: 3500ms
    - Serial merge join: 5000ms
    - Parallel merge join: 8400ms
    - Collocated parallel hash join: 25,300ms (hash spill per partition)

    The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers).
    This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you.

    Parallel Collocated Merge Join with Demand Partitioning

    This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions):

        SELECT row_count = SUM(Subtotals.cnt)
        FROM
        (
            VALUES
                (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),
                (11),(12),(13),(14),(15),(16),(17),(18),(19),(20),
                (21),(22),(23),(24),(25),(26),(27),(28),(29),(30),
                (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41)
        ) AS P (partition_number)
        CROSS APPLY
        (
            SELECT cnt = COUNT_BIG(*)
            FROM dbo.T1 AS T1
            JOIN dbo.T2 AS T2
                ON T2.TID = T1.TID
            WHERE $PARTITION.PFT(T1.TID) = p.partition_number
            AND $PARTITION.PFT(T2.TID) = p.partition_number
        ) AS SubTotals
        OPTION (QUERYTRACEON 8649);

    Compared with the parallel collocated hash join plan seen earlier, the manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer's collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms.

    Final Words

    It is a shame that the current query optimizer does not consider a collocated merge join (the Connect item was closed as Won't Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated – down from 569MB to 1.2MB.

    The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition.

    From a thread's point of view...

    If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let's look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time).
    The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once.

    Related Reading

    - Understanding and Using Parallelism in SQL Server
    - Parallel Execution Plans Suck

    © 2013 Paul White – All Rights Reserved. Twitter: @SQL_Kiwi

  • Oracle Open World 2012: SQL Developer Recap

    - by thatjeffsmith
    Last week was the 'big show' in San Francisco. I was very happy to meet many of you in person. And many of you had questions – lots of questions! We had full or overflowing rooms for our sessions and hands-on labs. The SQL Developer 'booths' were also slammed several times. So exciting to see so many of YOU excited about SQL Developer. It's very cool to hear the stories of our tools saving you and your organizations so much time (and money!). Instead of doing a Day 0 – Day 9 recap, I thought I'd share with you the questions that I heard more than once. And just for giggles, I'll throw in some answers as well. So in no particular order...

    What's the difference between Oracle SQL Developer & Oracle SQL Developer Data Modeler?

    Mathematically speaking – two words. But as far as the actual modeling features go, there's no difference between the two applications. The same 'code' or features as it pertains to data modeling and design are in both tools. However, in SQL Developer you have all of the OTHER features fighting for real estate in the UI. So I have a general rule of thumb: if you spend MOST of your time in the database, use SQL Developer; and if you spend most of your time in the data model, run the separate and dedicated program, Oracle SQL Developer Data Modeler. When the Data Modeler runs INSIDE of SQL Developer, the Modeler menu items fold under the File menu; the standalone modeler is easier to navigate and makes it easier to manipulate your models, there's just no worksheet to run your ad-hoc queries, etc. Don't forget you can disable the Data Modeler inside of SQL Developer via the Extensions preference page.

    How can I model my table partitions?

    Partitioning is defined via the physical model. So after you have finished your relational model, you need to generate a physical model. Open the properties for your physical model table and enable the 'Partitioned' property. Once you do so, the 'Partitioning' page will activate, with lots and lots of partitioning support and options.

    But what about interval partitioning?

    An extension of range partitioning in 11gR2; we don't currently support this partitioning scheme in SQL Developer. But we're working on it!

    Can SQL Developer ignore column order when comparing models?

    Yes! After you start a model compare, one of your options is to disregard the order of an attribute or column definition. Tell SQL Developer you don't care when your column shows up, just as long as it DOES show up.

    Wow, you got a lot of questions around modeling! Is that normal?

    Yes! While we appreciate that many folks inherit their applications and associated designs, new applications are being 'born' every day. Since both of our tools are free for anyone to design their new Oracle applications with, we attract a fair amount of attention.

    I want to do a hands-on lab. How do I get your software and instructional guides?

    Go here. Download VirtualBox, then download the VB image. Import the appliance. Start it. Connect oracle/oracle on the OEL VM. Click on 'Start Here' in the desktop. Follow the instructions. If you need help, ask away!

    You went too fast in your Tips & Tricks session. Do you have cliff notes?

    Yes! And you're SO close to finding them! Just go to my SQL Developer resources page. All of my tips are documented on this blog somewhere. I've indexed the most popular ones on the resource page.
You can use the Search dialog on the right to find the rest. Or just send me a comment or question, and I’ll do my best to answer them as they come in.

  • SQL Server 2005 - Syncing development/production databases

    - by hamlin11
    I've got a rather large SQL Server 2005 database that is under constant development. Every so often, I either get a new developer or need to deploy wide-scale schema changes to the production server. My main concern is deploying schema + data updates to developer machines from the "master" development copy. Is there some built-in functionality or tools for publishing schema + data in such a fashion? I'd like it to take as little time as possible. Can it be done from within SSMS? Thanks in advance for your time

  • Conditional Operator in SQL Where Clause

    - by Marc
    I wish I could do something like the following (which I know isn't valid) for my WHERE clause in SQL Server 2005. Sometimes @teamID (passed into a stored procedure) will be the value of an existing teamID; otherwise it will always be zero, and then I want all rows from the Team table. I researched using CASE, but the operator needs to come before or after the entire expression, which prevents me from having a different operator based on the value of @teamID. Any suggestions other than duplicating my SELECT statements?

        declare @teamid int
        set @teamid = 0

        Select Team.teamID
        From Team
        case @teamid
            when 0 then WHERE Team.teamID > 0
            else WHERE Team.teamID = @teamid
        end
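
    A minimal sketch of one common answer (not from the original post): fold both cases into a single predicate, so no conditional operator is needed.

        DECLARE @teamid int
        SET @teamid = 0

        SELECT Team.teamID
        FROM Team
        WHERE (@teamid = 0 AND Team.teamID > 0)          -- zero means: all teams
           OR (@teamid <> 0 AND Team.teamID = @teamid);  -- otherwise: one team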

  • Find earliest and latest dates of specified records from a table using SQL

    - by tonyyeb
    Hi, I have a table (in MS SQL 2005) with a selection of dates. I want to be able to apply a WHERE clause to return a group of them, and then return whichever date is the earliest from one column and whichever is the latest from another column. Here is an example table:

        ID   StartDate    EndDate      Person
        1    01/03/2010   03/03/2010   Paul
        2    12/05/2010   22/05/2010   Steve
        3    04/03/2010   08/03/2010   Paul

    So I want to return all the records where Person = 'Paul', but return something like (earliest) StartDate = 01/03/2010 (from record ID 1) and (latest) EndDate = 08/03/2010 (from record ID 3). Thanks in advance.
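
    A minimal sketch of the usual aggregate answer (dbo.Bookings is a hypothetical table name):

        SELECT MIN(StartDate) AS EarliestStart,  -- earliest start across the group
               MAX(EndDate)   AS LatestEnd       -- latest end across the group
        FROM dbo.Bookings
        WHERE Person = 'Paul';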

  • Changing SQL Server Database sorting

    - by plaisthos
    I have a request to change the collation of a SQL Server database:

        ALTER DATABASE solarwind95 COLLATE SQL_Latin1_General_CP1_CI_AS

    but I get this strange error:

        Meldung 5075, Ebene 16, Status 1, Zeile 1
        Das 'Spalte'-Objekt 'CustomPollerAssignment.PollerID' ist von 'Datenbanksortierung' abhängig.
        Die Datenbanksortierung kann nicht geändert werden, wenn ein schemagebundenes Objekt von ihr
        abhängig ist. Entfernen Sie die Abhängigkeiten der Datenbanksortierung, und wiederholen Sie
        den Vorgang.

    Sorry for the German error message; I don't know how to switch the language to English, but here is a translation: the column object 'CustomPollerAssignment.PollerID' depends on the database collation. The database collation cannot be changed while a schema-bound object depends on it. Remove the dependencies on the database collation and retry. I got a ton more errors like that.
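
    A hedged sketch (not from the original post) of one way to start hunting for the blockers: list schema-bound modules such as indexed views and schema-bound functions. Computed columns and CHECK constraints can also pin the database collation, so this list is not necessarily complete.

        SELECT o.name, o.type_desc
        FROM sys.sql_modules AS m
        JOIN sys.objects AS o
            ON o.[object_id] = m.[object_id]
        WHERE m.is_schema_bound = 1;   -- objects created WITH SCHEMABINDING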

  • SQL server 2005 - user rights

    - by Paresh
    I have created a user named "tuser" with CREATE DATABASE rights in SQL Server 2005, and given the db_owner database role in the master and msdb databases to "tuser". When I run the create-database script from this login, it creates the new database, but "tuser" doesn't have access to that newly created database. Does anyone have any idea? I want the script (which runs as "tuser") to also grant "tuser" access to the database after creating it, with permission to add users; in other words, to give "tuser" the db_owner role on the newly created database in the same script that creates it.
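
    A minimal sketch of the usual pattern (NewDb is a hypothetical database name): after creating the database, map the login to a database user and add it to db_owner in the same script.

        CREATE DATABASE NewDb;
        GO
        USE NewDb;
        GO
        CREATE USER tuser FOR LOGIN tuser;            -- map the login into the new database
        EXEC sp_addrolemember N'db_owner', N'tuser';  -- full rights, including adding users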

  • send dbmail on @@error from sql server 2005

    - by Ved
    Hi, I am trying to send database mail when an error occurs inside a transaction. My setup for dbo.sp_send_dbmail is correct; when I execute the proc directly, I do get an email within a minute. However, when I call dbo.sp_send_dbmail inside another proc, within a transaction, I do not get the email. SQL Server does show "Mail queued" in the result window, but I never receive it.

        BEGIN TRANSACTION
        DECLARE @err int
        DECLARE @test nvarchar(max)

        RAISERROR('This is a test', 16, 1)
        SELECT @err = @@ERROR
        IF @err <> 0
        BEGIN
            SET @test = error_message()
            EXEC msdb.dbo.sp_send_dbmail
                @recipients = '[email protected]',
                @body = 'test inside',
                @subject = 'Error with proc',
                @body_format = 'HTML',
                @append_query_error = 1,
                @profile_name = 'Database Mail Profile';
            ROLLBACK TRANSACTION
            RETURN
        END
        COMMIT TRANSACTION

    And I get the result:

        Msg 50000, Level 16, State 1, Line 7
        This is a test
        Mail queued.
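
    A hedged explanation, not from the original post: sp_send_dbmail queues the message inside the caller's transaction, so the ROLLBACK that follows the call also rolls the queued mail back out of msdb. One sketch of a workaround is to roll back first and send afterwards:

        BEGIN TRANSACTION;

        RAISERROR('This is a test', 16, 1);

        IF @@ERROR <> 0
        BEGIN
            ROLLBACK TRANSACTION;   -- undo the data changes first

            -- the send now happens outside the rolled-back transaction
            EXEC msdb.dbo.sp_send_dbmail
                @recipients   = '[email protected]',   -- address redacted as in the question
                @body         = 'test inside',
                @subject      = 'Error with proc',
                @profile_name = 'Database Mail Profile';
            RETURN;
        END

        COMMIT TRANSACTION;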

  • Is it possible to create a new T-SQL Operator using CLR Code in SQL Server?

    - by Eoin Campbell
    I have a very simple CLR function for doing regex matching:

        public static SqlBoolean RegExMatch(SqlString input, SqlString pattern)
        {
            if (input.IsNull || pattern.IsNull)
                return SqlBoolean.False;

            return Regex.IsMatch(input.Value, pattern.Value, RegexOptions.IgnoreCase);
        }

    It allows me to write a SQL statement like:

        SELECT *
        FROM dbo.table1
        WHERE dbo.RegexMatch(column1, '[0-9][A-Z]') = 1 -- match entries in col1 like 1A, 2B etc...

    I'm just thinking it would be nice to reformulate that query so it could be called like:

        SELECT *
        FROM dbo.table1
        WHERE column1 REGEXLIKE '[0-9][A-Z]'

    Is it possible to create new comparison operators using CLR code? (I'm guessing from my brief glance around the web that the answer is NO, but no harm asking.)

  • How to make awkward pivot of sql table in MS SQL Server 2005?

    - by Oliver
    I have to rotate a given table from a SQL Server, but a normal pivot just doesn't work (as far as I tried). So does anybody have an idea how to rotate the table into the desired format? Just to make the problem more complicated, the list of given labels can vary, and it is possible that a new label name can come in at any given time.

    Given data:

        ID | Label           | Numerator | Denominator | Ratio
        ---+-----------------+-----------+-------------+--------
         1 | LabelNameOne    |        41 |          10 | 4,1
         1 | LabelNameTwo    |         0 |           0 | 0
         1 | LabelNameThree  |        21 |          10 | 2,1
         1 | LabelNameFour   |        15 |          10 | 1,5
         2 | LabelNameOne    |        19 |          19 | 1
         2 | LabelNameTwo    |         0 |           0 | 0
         2 | LabelNameThree  |        15 |          16 | 0,9375
         2 | LabelNameFive   |        19 |          19 | 1
         2 | LabelNameSix    |        17 |          17 | 1
         3 | LabelNameOne    |        12 |          12 | 1
         3 | LabelNameTwo    |         0 |           0 | 0
         3 | LabelNameThree  |        11 |          12 | 0,9167
         3 | LabelNameFour   |        12 |          12 | 1
         3 | LabelNameSix    |         0 |           1 | 0

    Wanted result:

        ID | ValueType   | LabelNameOne | LabelNameTwo | LabelNameThree | LabelNameFour | LabelNameFive | LabelNameSix
        ---+-------------+--------------+--------------+----------------+---------------+---------------+--------------
         1 | Numerator   |           41 |            0 |             21 |            15 |               |
         1 | Denominator |           10 |            0 |             10 |            10 |               |
         1 | Ratio       |          4,1 |            0 |            2,1 |           1,5 |               |
         2 | Numerator   |           19 |            0 |             15 |               |            19 |           17
         2 | Denominator |           19 |            0 |             16 |               |            19 |           17
         2 | Ratio       |            1 |            0 |         0,9375 |               |             1 |            1
         3 | Numerator   |           12 |            0 |             11 |            12 |               |            0
         3 | Denominator |           12 |            0 |             12 |            12 |               |            1
         3 | Ratio       |            1 |            0 |         0,9167 |             1 |               |            0
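
    A hedged sketch of the standard UNPIVOT-then-PIVOT answer (not from the original thread). dbo.Measures is a hypothetical table name, the measures are cast to one common type as UNPIVOT requires, and a varying label list would need the statement to be built with dynamic SQL:

        SELECT ID, ValueType,
               LabelNameOne, LabelNameTwo, LabelNameThree,
               LabelNameFour, LabelNameFive, LabelNameSix
        FROM
        (
            -- flatten the three measure columns into (ID, Label, ValueType, Value)
            SELECT ID, Label, ValueType, Value
            FROM
            (
                SELECT ID, Label,
                       CAST(Numerator   AS decimal(12,4)) AS Numerator,
                       CAST(Denominator AS decimal(12,4)) AS Denominator,
                       CAST(Ratio       AS decimal(12,4)) AS Ratio
                FROM dbo.Measures
            ) AS Src
            UNPIVOT (Value FOR ValueType IN (Numerator, Denominator, Ratio)) AS U
        ) AS Flat
        PIVOT
        (
            MAX(Value) FOR Label IN (LabelNameOne, LabelNameTwo, LabelNameThree,
                                     LabelNameFour, LabelNameFive, LabelNameSix)
        ) AS P
        ORDER BY ID, ValueType;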

  • SQL CHECK constraint issues

    - by blahblah
    I'm using SQL Server 2008 and I have a table with three columns: Length, StartTime and EndTime. I want to make a CHECK constraint on this table which says:

        if Length == NULL then
            StartTime <> NULL and EndTime <> NULL
        else
            StartTime == NULL and EndTime == NULL

    I've begun to try things like this:

        Length == NULL AND StartTime <> NULL AND EndTime <> NULL

    Obviously this is not enough, but even this simple expression will not validate. I get the error: "Error validating 'CK_Test_Length_Or_Time'. Do you want to edit the constraint?" Any ideas on how to go about doing this?
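
    A minimal sketch of a constraint expressing that logic (dbo.Test is a hypothetical table name). In T-SQL, NULL tests must be written with IS NULL / IS NOT NULL rather than = or <>, which is likely why the attempted expression would not validate:

        ALTER TABLE dbo.Test
        ADD CONSTRAINT CK_Test_Length_Or_Time CHECK
        (
            (Length IS NULL     AND StartTime IS NOT NULL AND EndTime IS NOT NULL)
         OR (Length IS NOT NULL AND StartTime IS NULL     AND EndTime IS NULL)
        );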

  • How to display all the dates between two given dates in SQL

    - by Gopal
    Using SQL Server 2000. If the start date is 06/23/2008 and the end date is 06/30/2008, then I need the output of the query to be:

        06/23/2008
        06/24/2008
        06/25/2008
        .
        .
        .
        06/30/2008

    I created a table named Integers which has one column, i, whose values are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and then I used the query below:

        SELECT DATEADD(d, H.i * 100 + T.i * 10 + U.i, '" & dtpfrom.Value & "') AS Dates
        FROM integers H
        CROSS JOIN integers T
        CROSS JOIN integers U
        ORDER BY Dates

    The above query displays only 999 dates (365 + 365 + 269 dates). Suppose I want to select more than 3 years (01/01/2003 to 01/01/2008); the above query is not suitable. How do I modify my query, or is there another query available for this condition? Please kindly provide the query.
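
    A minimal sketch of one way to extend the same idea (not from the original thread): add a thousands digit, giving 10,000 consecutive days (roughly 27 years), and bound the output with a WHERE clause. @StartDate and @EndDate stand in for the values the original query concatenates in:

        DECLARE @StartDate datetime, @EndDate datetime
        SET @StartDate = '20030101'
        SET @EndDate   = '20080101'

        SELECT DATEADD(d, TH.i * 1000 + H.i * 100 + T.i * 10 + U.i, @StartDate) AS Dates
        FROM integers TH
        CROSS JOIN integers H
        CROSS JOIN integers T
        CROSS JOIN integers U
        WHERE DATEADD(d, TH.i * 1000 + H.i * 100 + T.i * 10 + U.i, @StartDate) <= @EndDate
        ORDER BY Dates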

  • to_date in SQL Server 2005

    - by Chin
    Does anyone know how I would have to change the following to work with MS SQL?

        WHERE registrationDate BETWEEN to_date('2003/01/01', 'yyyy/mm/dd')
                                   AND to_date('2003/12/31', 'yyyy/mm/dd');

    What I have read implies I would have to construct it using DATEPART(), which could become very long-winded, especially when the goal is to compare on dates which I receive in the following format: "2003-12-30 10:07:42". It would be nice to pass them off to the database as-is. Any pointers appreciated.
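
    A minimal sketch of the usual translation (not from the original post): SQL Server has no to_date, but CONVERT with a style number, or an unambiguous yyyymmdd literal, covers the same ground:

        -- style 111 parses yyyy/mm/dd
        WHERE registrationDate BETWEEN CONVERT(datetime, '2003/01/01', 111)
                                   AND CONVERT(datetime, '2003/12/31', 111)

        -- or simply, since 'yyyymmdd' literals parse the same in any language setting:
        WHERE registrationDate BETWEEN '20030101' AND '20031231'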

  • Complex select query question for hardcore SQL designers

    - by eugeneK
    This is a very complex query; I've been trying to construct it for a few days with no real success. I'm using SQL Server 2005 Standard. What I need is: 5 CampaignVariants from Campaigns, of which 2 have the largest PPU values and 3 are random. The next condition is that CampaignDailyBudget and CampaignTotalBudget are below what is set in Campaign (calculated from the number of clicks in the Visitors table, which is connected to Campaigns via CampaignVariants). Next, CampaignLanguage, CampaignCategory, CampaignRegion and CampaignCountry must be the ones I send to this select (languageID, categoryID, regionID and countryID). The last condition is that the IP address I send to this select statement must not be in the IPs list for the current campaign (I delete IPs inactive for 24 hours). In other words, it gets 5 CampaignVariants for the user that enters the site, given the user's PublisherRegionUID, IP, language, country and region.
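
    A hedged sketch of just the "2 top PPU plus 3 random" skeleton (all names are guesses; the budget, targeting and IP conditions from the question would go into each WHERE clause):

        SELECT CV.* FROM
        (
            SELECT TOP (2) *                       -- the two largest PPU values
            FROM dbo.CampaignVariants
            ORDER BY PPU DESC
        ) AS CV
        UNION ALL
        SELECT R.* FROM
        (
            SELECT TOP (3) *                       -- three random picks from the rest
            FROM dbo.CampaignVariants
            WHERE CampaignVariantID NOT IN
                  (SELECT TOP (2) CampaignVariantID
                   FROM dbo.CampaignVariants
                   ORDER BY PPU DESC)
            ORDER BY NEWID()
        ) AS R;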

  • SQL - Two foreign keys that have a dependency between them

    - by Brian
    The current structure is as follows:

        Table RowType:
            RowTypeID

        Table RowSubType:
            RowSubTypeID
            FK_RowTypeID

        Table ColumnDef:
            FK_RowTypeID
            FK_RowSubTypeID (nullable)

    In short, I'm mapping column definitions to rows. In some cases, those rows have subtype(s), which will have column definitions specific to them. Alternatively, I could hang those column definitions that are specific to subtypes off their own table, or I could combine the data in RowType and RowSubType into one table and work with a single ID, but I'm not sure either is a better solution (if anything, I'd lean towards the latter, as we mostly end up pulling ColumnDefs for a given RowType/RowSubType). Is the current design SQL blasphemy? If I keep the current structure, how do I ensure that when RowSubTypeID is specified in ColumnDef, it corresponds to the RowType specified by RowTypeID? Should I try to enforce this with a trigger, or am I missing a simple redesign that would solve the problem? One such redesign is sketched below.
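
    One declarative option, sketched here rather than taken from the original question: give RowSubType a composite key carrying the parent RowTypeID, and have ColumnDef reference that composite key, so the pairing is enforced without a trigger.

        CREATE TABLE RowType
        (
            RowTypeID int NOT NULL PRIMARY KEY
        );

        CREATE TABLE RowSubType
        (
            RowTypeID    int NOT NULL REFERENCES RowType (RowTypeID),
            RowSubTypeID int NOT NULL,
            PRIMARY KEY (RowTypeID, RowSubTypeID)
        );

        CREATE TABLE ColumnDef
        (
            ColumnDefID  int NOT NULL PRIMARY KEY,
            RowTypeID    int NOT NULL REFERENCES RowType (RowTypeID),
            RowSubTypeID int NULL,
            -- checked only when RowSubTypeID is non-NULL; it then guarantees
            -- the subtype belongs to the row type named in this row
            FOREIGN KEY (RowTypeID, RowSubTypeID)
                REFERENCES RowSubType (RowTypeID, RowSubTypeID)
        );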

  • Problems when going from SQL 2005 to SQL 2008

    - by Nezdet
    Hi! I moved from SQL Server 2005 to 2008, and doing so gave me some problems with the full-text search. This site is based on full-text search. There are more deadlocks, the search is slower, and sometimes it returns empty lists; I don't know why. A lot of people have been writing about having this problem with 2008, but I haven't found any explanation of why 2005 worked better for my program. Please help me out!

  • SQL Server: query database user roles for all databases in server

    - by atricapilla
    I would like to make a query for database user roles for all databases in my SQL Server instance. I modified a query from sp_helpuser:

        SELECT
            u.name,
            CASE WHEN (r.principal_id IS NULL) THEN 'public' ELSE r.name END,
            l.default_database_name,
            u.default_schema_name,
            u.principal_id
        FROM sys.database_principals u
        LEFT JOIN (sys.database_role_members m
                   JOIN sys.database_principals r
                       ON m.role_principal_id = r.principal_id)
            ON m.member_principal_id = u.principal_id
        LEFT JOIN sys.server_principals l
            ON u.sid = l.sid
        WHERE u.type <> 'R'

    How can I modify this to query all databases? What is the link between sys.databases and sys.database_principals?
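
    A hedged sketch of one common approach (not from the original question): sys.database_principals is scoped per database, so the query must run in each database's context, for example via the undocumented sp_MSforeachdb procedure, with DB_NAME() added to label the rows:

        EXEC sp_MSforeachdb N'
        USE [?];
        SELECT
            DB_NAME() AS database_name,
            u.name AS user_name,
            ISNULL(r.name, ''public'') AS role_name
        FROM sys.database_principals AS u
        LEFT JOIN sys.database_role_members AS m
            ON m.member_principal_id = u.principal_id
        LEFT JOIN sys.database_principals AS r
            ON r.principal_id = m.role_principal_id
        WHERE u.type <> ''R'';';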

  • Sql Calculation And Sort By Date

    - by mahesh
    I'm confused about how to calculate a running stock balance by date, and how to sort the result by date. The real challenge is computing the running total across rows whose dates may be equal, greater, or less than one another.

    My table schema is:

        TransID int, auto increment
        Date datetime
        Inwards decimal(12,2)
        Outward decimal(12,2)

    Suppose I have records as below (dates are DD/MM/YYYY):

        TransID   Date         Inward   Outward
        1         03/02/2011            100
        2         12/04/2010            200
        3         03/02/2011   400

    Then the result should be:

        TransID   Date         Inward   Outward   Balance
        1         03/02/2011            100       -100
        2         12/04/2010            200       -300
        3         03/02/2011   400                 100

    I want to calculate Inward - Outward = Balance, with Balance as a running total as above, but following date order. How do I sort and calculate it by date and TransID in Transact-SQL on SQL Server 2000?
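
    SQL Server 2000 has no windowed SUM, so a correlated subquery is the usual workaround. A hedged sketch (dbo.Trans is a hypothetical table name) that orders by date and breaks ties on TransID:

        SELECT
            t.TransID,
            t.[Date],
            t.Inwards,
            t.Outward,
            (SELECT SUM(ISNULL(x.Inwards, 0) - ISNULL(x.Outward, 0))
             FROM dbo.Trans AS x
             WHERE x.[Date] < t.[Date]
                OR (x.[Date] = t.[Date] AND x.TransID <= t.TransID)) AS Balance
        FROM dbo.Trans AS t
        ORDER BY t.[Date], t.TransID;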

  • SQL Server union selects built dynamically from list of words

    - by Adam Tuttle
    I need to count the occurrences of a list of words across all records in a given table. If I only had one word, I could do this:

        SELECT COUNT(id) AS NumRecs
        FROM essays              -- table name not given in the question
        WHERE essay LIKE '%word%'

    But my list could be hundreds or thousands of words, and I don't want to issue hundreds or thousands of SQL requests serially; that seems silly. I had a thought that I might be able to create a stored procedure that would accept a comma-delimited list of words and, for each word, run the above query, then UNION them all together and return one huge dataset. (Sounds reasonable, right? But I'm not sure where to start with that approach...) Short of some weird thing with UNION, I might try to do something with a temp table: inserting a row for each word and record count, and then returning SELECT * FROM that temp table. If it's possible with a UNION, how? And does one approach have advantages (performance or otherwise) over the other?
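
    A hedged sketch of a set-based alternative (table and column names are guesses): load the words into a table, then a single join-and-group produces every per-word count with no UNION at all.

        -- words table, populated by splitting the comma-delimited input
        CREATE TABLE #Words (word varchar(100) PRIMARY KEY);

        SELECT w.word,
               COUNT(e.id) AS NumRecs          -- 0 when a word never matches
        FROM #Words AS w
        LEFT JOIN dbo.essays AS e
            ON e.essay LIKE '%' + w.word + '%'
        GROUP BY w.word;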
