Search Results

Search found 8547 results on 342 pages for 'hash join'.


  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
    OK, so that is quite a contradictory title, but unfortunately it is true that a common misconception is that the query with the highest percentage relative to the batch is the worst performing. Simply put, it is a lie, or more accurately we don't understand what these figures mean. Consider the two simple queries below: SELECT * FROM Person.BusinessEntity JOIN Person.BusinessEntityAddress ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID go SELECT * FROM Sales.SalesOrderDetail JOIN Sales.SalesOrderHeader ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID After executing these and looking at the plans, I see a 13% / 87% split. But 13% / 87% of WHAT? CPU? Duration? Reads? Writes? Or some magical weighted algorithm? In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out, but what about reads (210 and 1935)? To save you doing the maths, though you are more than welcome to, that's a 9.8% / 90.2% split. Close, but no cigar. Let's try a different tack. Looking at the execution plan, the “Estimated Subtree Cost” of query 1 is 0.29449 and of query 2 it is 1.96596. Again, to save you the maths, that works out to 13.03% and 86.97%; round those and those are the figures we are after. But what is the worrying word there? “Estimated”. So these are not “actual” execution costs. But what's the problem in comparing the estimated costs to derive a meaning of “most costly”? Well, in the case of simple queries such as the above, probably not a lot. In more complicated queries, a fair bit. By modifying the second query to also show the total number of lines on each order SELECT *,COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID) FROM Sales.SalesOrderDetail JOIN Sales.SalesOrderHeader ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID the split in percentages is now 6% / 94%, and the Profiler metrics show even more of a discrepancy. Estimates can be out with actuals for a whole host of reasons; scalar UDFs are a particular bugbear of mine, and in fact the cost of a UDF call is entirely hidden inside the execution plan. It always estimates to 0 (well, a very small number). Take for instance the following UDF: Create Function dbo.udfSumSalesForCustomer(@CustomerId integer) returns money as begin Declare @Sum money Select @Sum= SUM(SalesOrderHeader.TotalDue) from Sales.SalesOrderHeader where CustomerID = @CustomerId return @Sum end If we have two statements, one that fires the UDF and another that doesn't: Select CustomerID from Sales.Customer order by CustomerID go Select CustomerID,dbo.udfSumSalesForCustomer(Customer.CustomerID) from Sales.Customer order by CustomerID the cost relative to batch is a 50/50 split, but there has to be an actual cost of firing the UDF. Indeed, Profiler shows us it is nowhere even remotely near 50/50! Moving forward to the window framing functionality in SQL Server 2012, the optimizer sees ROWS and RANGE (see here for their functional differences) as the same ‘cost’ too: SELECT SalesOrderDetailID,SalesOrderId, SUM(LineTotal) OVER(PARTITION BY salesorderid ORDER BY Salesorderdetailid RANGE unbounded preceding) from Sales.SalesOrderdetail go SELECT SalesOrderDetailID,SalesOrderId, SUM(LineTotal) OVER(PARTITION BY salesorderid ORDER BY Salesorderdetailid Rows unbounded preceding) from Sales.SalesOrderdetail By now it won't be a great surprise that the Profiler trace reads a *tiny* bit different.
So, the moral of the story: percentage relative to batch can give a rough ‘finger in the air’ measurement, but don't rely on it as fact.
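
    For readers who want to verify the actual numbers themselves, here is a minimal sketch (assuming the same AdventureWorks database used above) that captures real reads and CPU per statement instead of trusting the plan's cost-relative-to-batch figure:

        -- Minimal sketch: capture actual I/O and CPU per statement rather than
        -- relying on "Query cost (relative to the batch)".
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        SELECT * FROM Person.BusinessEntity
        JOIN Person.BusinessEntityAddress
          ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID;

        SELECT * FROM Sales.SalesOrderDetail
        JOIN Sales.SalesOrderHeader
          ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;
        -- The "logical reads" and "CPU time" figures in the Messages tab are
        -- the actual costs to compare, not the estimated plan percentages.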

    Read the article

  • Design a T-shirt for .NET Reflector Pro

    - by Laila
    Win a .NET Reflector Pro license, a box of Red Gate goodies, and a t-shirt printed with your design! Red Gate likes t-shirts. Each of our teams has one. In fact, each individual person has one, numbered according to when they joined the company: Red Gate's 1st, 2nd, and so on right up to Red Gate's 170th, with the slogan "More than just a number". Those t-shirts are important, chiefly because they remind the people wearing them that they are important. But that isn't enough. What really makes us great are the people who choose to use our tools. So we'd like to extend our tradition of t-shirts to include you and put the design of our next shirt entirely in your hands. We'd like you to come up with a witty slogan or create an inventive or simply beautiful t-shirt design for .NET Reflector Pro, our add-in for Visual Studio, which allows you to step into decompiled assemblies whilst debugging in Visual Studio. When you're done, post your masterpiece to Twitter with the hash tag #reflectortees, and @redgate will take a look! We'll pick the best design, and the winner will get a licensed copy of .NET Reflector Pro and a box of Red Gate goodies - not to mention a copy of their t-shirt. The winning design will go into production and be worn and given out at tradeshows, conferences, and user group events across the world, proudly bearing the name of its designer. We'll also pick three runners-up who will receive licenses for .NET Reflector Pro. (Image: Red Gate goodie box.) Interested? If you're up for the challenge, then we've got some resources to get you started. Inside the .zip file you'll find high-quality versions of the following: T-shirt templates: don't forget to design the front and the back! Different versions of the .NET Reflector Pro logo and Red Gate logo. Colour sheets to give you an easy reference to the Red Gate colours, including hex and RGB values. You can create and send us as many designs as you like, and each of them will be considered for the prize. To submit your designs, simply tweet including the competition hash tag, #reflectortees, and a link to somewhere we can see your design: either an image hosting site such as Twitpic, Flickr or Picasa, or a personal blog. You will need to create a Twitter account (which is free), if you don't already have one. You only have three limits: The background colour of the t-shirt should be one of our brand colours (red, light/dark grey or black), though you're welcome to use other colours in the rest of the design. You need to make use of either the .NET Reflector Pro logo OR the Red Gate logo (please keep them as they are). If you include any text or slogan, stick with just one or two colours for it. Apart from that, go wild. Go and do whatever it is you do when you get creative: whether you walk barefoot on the grass with a pencil and paper, sit cross-legged on a pile of cushions with a laptop, or simply close your eyes and float through a mist of ideas, now is your chance. Make sure you enjoy it. We're looking forward to seeing your creations. Terms and conditions 1. The closing date for entries is June 11th, 2010 (4 p.m. UK time). Red Gate Software Ltd reserves the right to extend the competition deadline at its discretion. If there is a revision, the revised date will be published on this blog and the date for announcing the results will be postponed accordingly. 2. The winning designer will be notified on June 14th, 2010 through Twitter. 
The winner must claim his/her prize by sending us a high-resolution image of their design via email (e.g. Illustrator EPS files or another appropriate format, ideally at 300dpi). If the winner does not come forward within 3 days of the announcement, they will forfeit their prize and another winner will be selected from the runners-up. The names of the winner and runners-up will be posted on this blog by June 18th.  3. Entry is completed when the designer posts a link to their entry in a tweet with the correct hash tag, #reflectortees. 4. Red Gate Software needs to hold the rights to using the winning design in order to put the t-shirt into production. We will make sure that this is fine with the winner before we do so, but if you do not want us holding the rights to your design, please do not submit your designs. We reserve the right to slightly alter or adjust any artwork we decide to use (mainly to make it easier to print), but we will make sure we contact the winner for approval first. The winner will also need to allow us the use of his/her name for purposes of promoting their design. 5. Entries must be entirely your own original work and must not breach any copyright or third party rights. Red Gate Software Ltd will not be made partially or fully liable for any non-original work submitted by you. 6. This competition is free: you do not need to buy anything or be an existing customer to enter. 7. This competition is not open to employees of Red Gate Software Ltd, their families, or any other company directly connected with the administration of this promotion.

    Read the article

  • SQL SERVER – Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012

    - by pinaldave
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(). These functions access data from a subsequent row (for LEAD) or a previous row (for LAG) in the same result set without the use of a self-join. It is very difficult to explain this in words, so I will use a small example. Instead of creating a new table, I will be using the AdventureWorks sample database, as most developers use it for experiments. Let us run the following query. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, LEAD(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID ) LeadValue, LAG(SalesOrderDetailID) OVER (ORDER BY SalesOrderDetailID ) LagValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query gives us the following result. When we look at the result set it is very clear that the LEAD function gives us the value which is going to come on the next line, and the LAG function gives us the value which was encountered on the previous line. If we had to generate the same result without using these functions, we would have to use a self-join; a sketch of that appears at the end of this entry. Let us explore these functions a bit more. They can access not only the previous or next line, but any line before or after, using an offset. Let us run the following query, where the LEAD and LAG functions access the row with an offset of 2. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, LEAD(SalesOrderDetailID,2) OVER (ORDER BY SalesOrderDetailID ) LeadValue, LAG(SalesOrderDetailID,2) OVER (ORDER BY SalesOrderDetailID ) LagValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query gives us the following result. You can see that the LEAD and LAG functions now have an interval of two rows when they are returning results. As there is an interval of two rows, the last two rows for the LEAD function and the first two rows for the LAG function return a NULL value. You can easily replace this NULL value with any other default value by passing a third parameter to the LEAD and LAG functions. Let us run the following query. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, LEAD(SalesOrderDetailID,2,0) OVER (ORDER BY SalesOrderDetailID ) LeadValue, LAG(SalesOrderDetailID,2,0) OVER (ORDER BY SalesOrderDetailID ) LagValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query gives us the following result, where NULLs are now replaced with the value 0. Just like any other analytic function, we can easily partition these functions as well. Let us see the use of PARTITION BY in this clause. USE AdventureWorks GO SELECT s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty, LEAD(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ) LeadValue, LAG(SalesOrderDetailID) OVER (PARTITION BY SalesOrderID ORDER BY SalesOrderDetailID ) LagValue FROM Sales.SalesOrderDetail s WHERE SalesOrderID IN (43670, 43669, 43667, 43663) ORDER BY s.SalesOrderID,s.SalesOrderDetailID,s.OrderQty GO The above query gives us the following result, where the data is now partitioned by SalesOrderID and the LEAD and LAG functions return the appropriate result within that window. As there are now smaller partitions in the query, you will see a higher presence of NULLs. 
In a future blog post we will see how these functions compare to a SELF JOIN. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
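
    In the meantime, here is a minimal hedged sketch (mine, not from the post) of the self-join equivalent, built on ROW_NUMBER() over the same rows used above:

        -- Emulating LEAD/LAG with a ROW_NUMBER() self-join, for comparison.
        ;WITH Ordered AS (
            SELECT s.SalesOrderID, s.SalesOrderDetailID, s.OrderQty,
                   ROW_NUMBER() OVER (ORDER BY s.SalesOrderDetailID) AS rn
            FROM Sales.SalesOrderDetail s
            WHERE s.SalesOrderID IN (43670, 43669, 43667, 43663)
        )
        SELECT cur.SalesOrderID, cur.SalesOrderDetailID, cur.OrderQty,
               nxt.SalesOrderDetailID AS LeadValue,  -- what LEAD(SalesOrderDetailID) returns
               prv.SalesOrderDetailID AS LagValue    -- what LAG(SalesOrderDetailID) returns
        FROM Ordered cur
        LEFT JOIN Ordered nxt ON nxt.rn = cur.rn + 1
        LEFT JOIN Ordered prv ON prv.rn = cur.rn - 1
        ORDER BY cur.SalesOrderID, cur.SalesOrderDetailID, cur.OrderQty;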

    Read the article

  • PASS Summit 2011 – Part IV

    - by Tara Kizer
    This is the final blog for my PASS Summit 2011 series.  Well okay, a mini-series, I guess. On the last day of the conference, I attended Keith Elmore's and Boris Baryshnikov's (both from Microsoft) “Introducing the Microsoft SQL Server Code Named “Denali” Performance Dashboard Reports”, Jeremiah Peschka's (blog|twitter) “Rewrite your T-SQL for Great Good!”, and Kimberly Tripp's (blog|twitter) “Isolated Disasters in VLDBs”. Keith and Boris talked about the lifecycle of a session, figuring out the running time and the waiting time.  They pointed out the transient nature of the reports.  You could be drilling into one to uncover a problem, but the session may have ended by the time you've drilled all of the way down.  Also, the reports are for troubleshooting live problems and not historical ones.  You can use Management Data Warehouse for historical troubleshooting.  The reports provide similar benefits to the Activity Monitor; however, Activity Monitor doesn't provide context-sensitive drill-through. One thing I learned in Keith's and Boris' session was that the buffer cache hit ratio should really never be below 87% due to the read-ahead mechanism in SQL Server.  When a page is read, SQL Server will read the entire extent.  So for every page read, you get 7 more reads.  If you need any of those 7 extra pages, well, they are already in cache.  I had a lot of fun in Jeremiah's session about refactoring code, plus I learned a lot.  His slides were visually presented in a fun way, which just made for a more upbeat presentation.  Jeremiah says that before you start refactoring, you should look at your system.  Investigate missing or too many indexes, out-of-date statistics, and other areas that could be leading to your code running slow.  He talked about code standards.  He suggested using common abbreviations for aliases instead of one-letter aliases.  I'm a big offender of one-letter aliases, but he makes a good point.  He said that join order does not matter to the optimizer, but it does matter to those who have to read your code.  Now let's get into refactoring! Eliminate useless things – useless/unneeded joins and columns.  If you don't need it, get rid of it! Instead of using DISTINCT/JOIN, replace with EXISTS (see the sketch after this entry) Simplify your conditions; use UNION or better yet UNION ALL instead of OR to avoid a scan, and use indexes for each union query Branching logic – instead of IF this, IF that, and on and on…use dynamic SQL (sp_executesql, please!) or use a parameterized query in the application Correlated subqueries – YUCK! Replace with a join Eliminate repeated patterns Last, but certainly not least, was Kimberly's session.  Kimberly is my favorite speaker.  I attended her two-day pre-conference seminar at PASS Summit 2005 as well as a SQL Immersion Event last December.  Did I mention she's my favorite speaker?  Okay, enough of that. Kimberly's session was packed with demos.  I had seen some of it in the SQL Immersion Event, but it was very nice to get a refresher on these, especially since I've got a VLDB with some growing pains.  One key takeaway from her session is the idea of using a log shipping solution with a load delay, such as 6, 8, or 24 hours behind the primary.  In the case of, say, an accidentally dropped table in a VLDB, we could retrieve it from the secondary database rather than waiting an eternity for a restore to complete.  Kimberly let us know that in SQL Server 2012 (it finally has a name!), online rebuilds are supported even if there are LOB columns in your table.  
This will simplify custom code that intelligently figures out if an online rebuild is possible. There was actually one last time slot for sessions that day, but I had an airplane to catch and my kids to see!
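
    To make the DISTINCT/JOIN item from Jeremiah's list concrete, here is a small hedged sketch of that refactoring (the table names are hypothetical, not from the session):

        -- Before: DISTINCT papers over the row multiplication caused by the join.
        SELECT DISTINCT c.CustomerID, c.Name
        FROM dbo.Customer c
        JOIN dbo.Orders o ON o.CustomerID = c.CustomerID;

        -- After: EXISTS asks the actual question ("has at least one order")
        -- without multiplying rows first.
        SELECT c.CustomerID, c.Name
        FROM dbo.Customer c
        WHERE EXISTS (SELECT 1 FROM dbo.Orders o WHERE o.CustomerID = c.CustomerID);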

    Read the article

  • SQLAuthority News – Who I Am And How I Got Here – True Story as Blog Post

    - by pinaldave
    Here are a few of the sample questions I get every day: “Give me shortcut to become superstar?” “How do I become like you?” “Which book I should read so I know everything?” “Can you share your secret to be successful? I want to know it but do not share with others.” The generic answer I always give is to work hard and read good educational material or watch good online videos. One of the emails really caught my attention. It was from a friend and SQL Server expert, John Sansom (Blog | Twitter). He asked if I would like to share my story with the world about “Who I Am and How I Got Here”. I was very much intrigued by his suggestion. John is one guy I respect a lot. Every single topic he writes, I read with dedication. I eagerly wait for his Weekly Summary of Best SQL Links. If you have not read them, you are missing out. Writing a guest post for him was like walking down memory lane. I remembered the time when I was beginning my career and was a bit overconfident and a bit naive. I had my share of mistakes when I started my career. As time passed by, I realized the truth. Well, we all make mistakes. Though, I am proud that as soon as I knew my mistakes, I corrected them. I never acted on impulse or when I was angry. I think that alone has helped me analyze situations better and become a better human being. Along the way, I lost my ego and it was replaced by passion. I am much happier and more successful in my work. Quite often people ask me if I am always online and whether I have a family or not. Honestly, I am able to work hard because of my family. They support me and encourage me to enjoy what I do. They support everything I do and, personally, I do not miss a single occasion to join them in the daily chores of fun. If there were a shortcut to success, I would want to know it. I learnt SQL Server the hard way and I am still learning. There are so many things I have to learn, and there is not enough time to learn everything which we want to learn. I am constantly working on it every day. I welcome you to join my journey as well. Please join me in my journey to learn SQL Server – the more the merrier. I have written the story of my life as a guest post.  Read Here: A Journey to SQL Authority Special thanks to John Sansom (Blog | Twitter) for giving me space to tell my story. Indeed I am honored. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Best Practices, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Generate a merge statement from table structure

    - by Nigel Rivett
    This code generates a merge statement joining on the natural key and checking all other columns to see if they have changed. The full version deals with type 2 processing and an audit trail, but this version is useful, and just the insert-or-update part is handy too. Change the table at the top (spt_values in master in this version) and the join columns for the merge in @nk. The output generated is at the top and the code to run to generate it is below. Output merge spt_values a using spt_values b on a.name = b.name and a.number = b.number and a.type = b.type when matched and (1=0 or (a.low <> b.low) or (a.low is null and b.low is not null) or (a.low is not null and b.low is null) or (a.high <> b.high) or (a.high is null and b.high is not null) or (a.high is not null and b.high is null) or (a.status <> b.status) or (a.status is null and b.status is not null) or (a.status is not null and b.status is null) ) then update set low = b.low , high = b.high , status = b.status when not matched by target then insert ( name , number , type , low , high , status ) values ( b.name , b.number , b.type , b.low , b.high , b.status ); Generator set nocount on declare @t varchar(128) = 'spt_values' declare @i int = 0 -- this is the natural key on the table used for the merge statement join declare @nk table (ColName varchar(128)) insert @nk select 'Number' insert @nk select 'Name' insert @nk select 'Type' declare @cols table (seq int, nkseq int, type int, colname varchar(128)) ;with cte as ( select ordinal_position, type = case when columnproperty(object_id(@t), COLUMN_NAME,'IsIdentity') = 1 then 3 when nk.ColName is not null then 1 else 0 end, COLUMN_NAME from information_schema.columns c left join @nk nk on c.column_name = nk.ColName where table_name = @t ) insert @cols (seq, nkseq, type, colname) select ordinal_position, row_number() over (partition by type order by ordinal_position), type, COLUMN_NAME from cte declare @result table (i int, j int, k int, data varchar(500)) select @i = @i + 1 insert @result (i, data) select @i, 'merge ' + @t + ' a' select @i = @i + 1 insert @result (i, data) select @i, ' using ' + @t + ' b' select @i = @i + 1 insert @result (i, j, data) select @i, nkseq, ' ' + case when nkseq = 1 then 'on' else 'and' end + ' a.' + ColName + ' = b.' + ColName from @cols where type = 1 select @i = @i + 1 insert @result (i, data) select @i, ' when matched and (1=0' select @i = @i + 1 insert @result (i, j, k, data) select @i, seq, 1, ' or (a.' + ColName + ' <> b.' + ColName + ')' + ' or (a.' + ColName + ' is null and b.' + ColName + ' is not null)' + ' or (a.' + ColName + ' is not null and b.' + ColName + ' is null)' from @cols where type <> 1 select @i = @i + 1 insert @result (i, data) select @i, ' )' select @i = @i + 1 insert @result (i, data) select @i, ' then update set' select @i = @i + 1 insert @result (i, j, data) select @i, nkseq, ' ' + case when nkseq = 1 then ' ' else ', ' end + colname + ' = b.' + colname from @cols where type = 0 select @i = @i + 1 insert @result (i, data) select @i, ' when not matched by target then insert' select @i = @i + 1 insert @result (i, data) select @i, ' (' select @i = @i + 1 insert @result (i, j, data) select @i, seq, ' ' + case when seq = 1 then ' ' else ', ' end + colname from @cols where type <> 3 select @i = @i + 1 insert @result (i, data) select @i, ' )' select @i = @i + 1 insert @result (i, data) select @i, ' values' select @i = @i + 1 insert @result (i, data) select @i, ' (' select @i = @i + 1 insert @result (i, j, data) select @i, seq, ' ' + case when seq = 1 then ' ' else ', ' end + 'b.' + colname from @cols where type <> 3 select @i = @i + 1 insert @result (i, data) select @i, ' );' select data from @result order by i,j,k,data
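
    As an aside (my note, not part of the original post): the per-column NULL-safe inequality block that the generator emits is logically equivalent to the INTERSECT-based change test often used in MERGE statements, which some people find easier to read:

        -- Hedged alternative to the expanded NULL-safe comparisons above.
        -- NOT EXISTS (... INTERSECT ...) is true only when at least one column
        -- differs, with NULLs compared as equal.
        merge spt_values a
        using spt_values b
           on a.name = b.name and a.number = b.number and a.type = b.type
        when matched and not exists (select a.low, a.high, a.status
                                     intersect
                                     select b.low, b.high, b.status)
            then update set low = b.low, high = b.high, status = b.status;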

    Read the article

  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
    I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics.  We've been running sp_updatestats, but apparently it wasn't sampling enough of the table to get us what we needed.  I have a pretty limited window at night where I can hammer the disks while this runs.  The script below just calls UPDATE STATISTICS on all tables that “need” updating.  It defines need as any table whose statistics are older than the number of days you specify (30 by default).  It also has a throttle so it breaks out of the loop after a set amount of time (60 minutes).  That means it won't start processing a new table after this time, but it might take longer than this to finish what it's doing.  It always processes the oldest statistics first, so it will eventually get to all of them.  It defaults to sampling 25% of the table.  I'm not sure that's a good default, but it works for now.  I've tested this in SQL Server 2005 and SQL Server 2008.  I liked the way Michelle parameterized her re-index script and I took the same approach. CREATE PROCEDURE dbo.UpdateStatistics ( @timeLimit smallint = 60, @debug bit = 0, @executeSQL bit = 1, @samplePercent tinyint = 25, @printSQL bit = 1, @minDays tinyint = 30 ) AS /* Copyright Bill Graziano 2010 */ SET NOCOUNT ON; PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...' IF OBJECT_ID('tempdb..#status') IS NOT NULL DROP TABLE #status; CREATE TABLE #status ( databaseID INT, databaseName NVARCHAR(128), objectID INT, page_count INT, schemaName NVARCHAR(128) NULL, objectName NVARCHAR(128) NULL, lastUpdateDate DATETIME, scanDate DATETIME, CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED (databaseID, objectID) ); DECLARE @SQL NVARCHAR(MAX); DECLARE @dbName nvarchar(128); DECLARE @databaseID INT; DECLARE @objectID INT; DECLARE @schemaName NVARCHAR(128); DECLARE @objectName NVARCHAR(128); DECLARE @lastUpdateDate DATETIME; DECLARE @startTime DATETIME; SELECT @startTime = GETDATE(); DECLARE cDB CURSOR READ_ONLY FOR select [name] from master.sys.databases where database_id > 4 OPEN cDB FETCH NEXT FROM cDB INTO @dbName WHILE (@@fetch_status <> -1) BEGIN IF (@@fetch_status <> -2) BEGIN SELECT @SQL = ' use ' + QUOTENAME(@dbName) + ' select DB_ID() as databaseID , DB_NAME() as databaseName ,t.object_id ,sum(used_page_count) as page_count ,s.[name] as schemaName ,t.[name] AS objectName , COALESCE(d.stats_date, ''1900-01-01'') , GETDATE() as scanDate from sys.dm_db_partition_stats ps join sys.tables t on t.object_id = ps.object_id join sys.schemas s on s.schema_id = t.schema_id join ( SELECT object_id, MIN(stats_date) as stats_date FROM ( select object_id, stats_date(object_id, stats_id) as stats_date from sys.stats) as d GROUP BY object_id ) as d ON d.object_id = t.object_id where ps.row_count > 0 group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'') ' SET ANSI_WARNINGS OFF; Insert #status EXEC (@SQL); SET ANSI_WARNINGS ON; END FETCH NEXT FROM cDB INTO @dbName END CLOSE cDB DEALLOCATE cDB DECLARE cStats CURSOR READ_ONLY FOR SELECT databaseID, databaseName, objectID, schemaName, objectName, lastUpdateDate FROM #status WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays ORDER BY lastUpdateDate ASC, page_count desc, [objectName] ASC OPEN cStats FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate WHILE (@@fetch_status <> -1) BEGIN IF (@@fetch_status <> -2) BEGIN IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit BEGIN PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***'; GOTO __DONE; END SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dBName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@ObjectName) + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;'; IF @printSQL = 1 PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')' IF @executeSQL = 1 BEGIN EXEC (@SQL); END END FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate END __DONE: CLOSE cStats DEALLOCATE cStats PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.' GO
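
    A minimal usage sketch (parameter names as defined in the procedure above), doing a dry run that prints the statements without executing them:

        -- Dry run: print the UPDATE STATISTICS statements for anything older
        -- than 14 days, but do not execute them.
        EXEC dbo.UpdateStatistics
              @timeLimit     = 30   -- stop picking up new tables after 30 minutes
            , @executeSQL    = 0    -- 1 = actually run the statements
            , @printSQL      = 1
            , @samplePercent = 25
            , @minDays       = 14;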

    Read the article

  • Indexed view deadlocking

    - by Dave Ballantyne
    Deadlocks can be a really tricky thing to track down the root cause of.  There are lots of articles on the subject of tracking down deadlocks, but seldom do I find that in a production system the cause is as straightforward.  That being said, deadlocks are always caused by process A needing a resource that process B has locked while process B needs a resource that process A has locked.  There may be a longer chain of processes involved, but that is the basic premise. Here is one such (much simplified) scenario whose cause was at first non-obvious: The system has two tables, Products and Stock.  The Products table holds the description and prices of a product whilst Stock records the current stock level. USE tempdb GO CREATE TABLE Product ( ProductID INTEGER IDENTITY PRIMARY KEY, ProductName VARCHAR(255) NOT NULL, Price MONEY NOT NULL ) GO CREATE TABLE Stock ( ProductId INTEGER PRIMARY KEY, StockLevel INTEGER NOT NULL ) GO INSERT INTO Product SELECT TOP(1000) CAST(NEWID() AS VARCHAR(255)), ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER))%100 FROM sys.columns a CROSS JOIN sys.columns b GO INSERT INTO Stock SELECT ProductID,ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER))%100 FROM Product There is a single stored procedure, GetStock: Create Procedure GetStock as SELECT Product.ProductID,Product.ProductName FROM dbo.Product join dbo.Stock on Stock.ProductId = Product.ProductID where Stock.StockLevel <> 0 Analysis of the system showed that this procedure was causing a performance overhead, and as reads of this data were many times more frequent than writes, an indexed view was created to lower the overhead. CREATE VIEW vwActiveStock With schemabinding AS SELECT Product.ProductID,Product.ProductName FROM dbo.Product join dbo.Stock on Stock.ProductId = Product.ProductID where Stock.StockLevel <> 0 go CREATE UNIQUE CLUSTERED INDEX PKvwActiveStock on vwActiveStock(ProductID) This worked perfectly: performance was improved, the team was cheered to the rafters, and beers all round.  Then, after a while, something else happened… The system updating the data changed.  The update pattern of both the Stock update and the Product update used to be: BEGIN TRAN UPDATE... COMMIT BEGIN TRAN UPDATE... COMMIT BEGIN TRAN UPDATE... COMMIT It changed to: BEGIN TRAN UPDATE... UPDATE... UPDATE... COMMIT Nothing that would raise an eyebrow in even the closest of code reviews.  But after this change we saw deadlocks occurring. You can reproduce this by opening two sessions. In session 1: begin transaction Update Product set ProductName ='Test' where ProductID = 998 Then in session 2: begin transaction Update Stock set Stocklevel = 5 where ProductID = 999 Update Stock set Stocklevel = 5 where ProductID = 998 Hop back to session 1 and… Update Product set ProductName ='Test' where ProductID = 999 Looking at the deadlock graphs we could see the contention was between two processes, one updating stock and the other updating product, but we knew that all the processes do to the tables is update them.  Period.  There are separate processes that handle the update of stock and product and never the twain shall meet; there is no reason why one should be requiring data from the other.  Then it struck us: ah, the indexed view. Naturally, when you make an update to any table involved in an indexed view, the view has to be updated.  When this happens, the data in all the tables has to be read, so that explains our deadlocks.  The data from stock is read when you update product, and vice-versa. 
The fix, once you understand the problem fully, is pretty simple: the apps did not guarantee the order in which data was updated.  Luckily it was a relatively simple fix to order the updates (see the sketch below), and the deadlocks went away.  Note that there is still a *slight* risk of a deadlock occurring, if both a stock update and a product update occur at *exactly* the same time.
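
    A hedged sketch of what "ordering the updates" means here, using the repro tables above: both sessions simply agree to touch ProductIDs in ascending order, so the locks on vwActiveStock are always acquired in one consistent sequence.

        -- Session 2, reworked: same work as in the repro, but the ProductIDs
        -- are updated in ascending order to match session 1's ordering.
        BEGIN TRAN
        UPDATE Stock SET StockLevel = 5 WHERE ProductID = 998
        UPDATE Stock SET StockLevel = 5 WHERE ProductID = 999
        COMMIT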

    Read the article

  • XNA Multiplayer Games and Networking

    - by JoshReuben
    ·        XNA communication must by default be lightweight – if you are syncing game state between players from the Game.Update method, you must minimize traffic. That game loop may be firing 60 times a second, and player 5 needs to know whether his tank has collided with player 3's and the angle of that gun turret. There are no WCF ServiceContract / DataContract niceties here, but at the same time the XNA networking stack simplifies the details. The payload must be simplistic - just an ordered set of numbers that you would map to meaningful enum values upon deserialization.   Overview ·        XNA allows you to create and join multiplayer game sessions, to manage game state across clients, and to interact with the friends list ·        Dependency on Gamer Services - to receive notifications such as sign-in status changes and game invitations ·        Two types of online multiplayer games: system link game sessions (LAN) and LIVE sessions (WAN) ·        Minimum dev requirements: 1 Xbox 360 console + Creators Club membership to test network code - run 1 instance of the game on Xbox 360, and 1 on a Windows-based computer   Network Sessions ·        A network session is made up of players in a game + up to 8 arbitrary integer properties describing the session ·        Create custom enums (e.g. GameMode, SkillLevel) as keys in the NetworkSessionProperties collection ·        Player state: lobby, in-play   Session Types ·        local session - for split-screen gaming - requires no network traffic ·        system link session - connects multiple gaming machines over a local subnet ·        Xbox LIVE multiplayer session - occurs on the Internet. Ranked or unranked   Session Updates ·        NetworkSession class Update method - must be called once per frame ·        performs the following actions: o   Sends the network packets. o   Changes the session state. o   Raises the managed events for any significant state changes. o   Returns the incoming packet data. ·        synchronize the session → packet-received and state-change events → no threading issues   Session Config ·        Session host - the gaming machine that creates the session. XNA handles host migration ·        NetworkSession properties: AllowJoinInProgress, AllowHostMigration ·        NetworkSession groups: AllGamers, LocalGamers, RemoteGamers   Subscribe to NetworkSession events ·        GamerJoined ·        GamerLeft ·        GameStarted ·        GameEnded – use to return to lobby ·        SessionEnded – use to return to title screen   Create a Session session = NetworkSession.Create(         NetworkSessionType.SystemLink,         maximumLocalPlayers,         maximumGamers,         privateGamerSlots,         sessionProperties );   Start a Session if (session.IsHost) {     if (session.IsEveryoneReady)     {        session.StartGame();        foreach (var gamer in SignedInGamer.SignedInGamers)        {             gamer.Presence.PresenceMode =                 GamerPresenceMode.InCombat;   Find a Network Session AvailableNetworkSessionCollection availableSessions = NetworkSession.Find(     NetworkSessionType.SystemLink,       maximumLocalPlayers,     networkSessionProperties); availableSessions.AllowJoinInProgress = true;   Join a Network Session NetworkSession session = NetworkSession.Join(     availableSessions[selectedSessionIndex]);   Sending Network Data var packetWriter = new PacketWriter(); foreach (LocalNetworkGamer gamer in session.LocalGamers) {     // Get the tank associated with this player.     
Tank myTank = gamer.Tag as Tank;     // Write the data.     packetWriter.Write(myTank.Position);     packetWriter.Write(myTank.TankRotation);     packetWriter.Write(myTank.TurretRotation);     packetWriter.Write(myTank.IsFiring);     packetWriter.Write(myTank.Health);       // Send it to everyone.     gamer.SendData(packetWriter, SendDataOptions.None);     }   Receiving Network Data foreach (LocalNetworkGamer gamer in session.LocalGamers) {     // Keep reading while packets are available.     while (gamer.IsDataAvailable)     {         NetworkGamer sender;          // Read a single packet.         gamer.ReceiveData(packetReader, out sender);          if (!sender.IsLocal)         {             // Get the tank associated with this packet.             Tank remoteTank = sender.Tag as Tank;              // Read the data and apply it to the tank.             remoteTank.Position = packetReader.ReadVector2();             …   End a Session if (session.AllGamers.Count == 1)         {             session.EndGame();             session.Update();         }   Performance •        Aim to minimize payload, reliable in order messages •        Send Data Options: o   Unreliable, out of order -(SendDataOptions.None) o   Unreliable, in order (SendDataOptions.InOrder) o   Reliable, out of order (SendDataOptions.Reliable) o   Reliable, in order (SendDataOptions.ReliableInOrder) o   Chat data (SendDataOptions.Chat) •        Simulate: NetworkSession.SimulatedLatency , NetworkSession.SimulatedPacketLoss •        Voice support – NetworkGamer properties: HasVoice ,IsTalking , IsMutedByLocalUser

    Read the article

  • Fetching Partition Information

    - by Mike Femenella
    For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition, and the file group name on which each partition resided, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab the information required. The table I'm working on contains 8.8 billion rows. Finding the distinct partition keys from this table was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the rows and the $partition function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn't pretty, and it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table; queries for distinct against 8.8 billion rows don't go quickly. A few beers into a conversation with a friend, we ended up talking about work, which led to a conversation about the task described above. The solution was already built into SQL Server; it just needed to be pulled together. The first table I needed was sys.partition_range_values. This contains one row for each range boundary value for a partition function. In my case I have a partition function which uses dayid values. For example, July 4th would be represented as an int, 20130704. This table lists out all of the dayid values which were defined in the function. This eliminated the need to query my source table for distinct dayid values; everything I needed was already built in here for me. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values. I just created an "everything else" bucket in my SSIS package just in case I had any dayid values unaccounted for. Getting the number of rows for a partition is very easy: the sys.partitions table contains values for each partition, easy enough to achieve by querying for the object_id and an index value of 1 (the clustered index). The final piece of information was the filegroup name. There are 2 options available to get the filegroup name, sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups, but really it's a matter of preference and data needs. In order to bridge between the sys.partitions table and either sys.data_spaces or sys.filegroups you need to get the container_id. This can be done by joining sys.allocation_units.container_id to sys.partitions.hobt_id. sys.allocation_units contains the field data_space_id, which then lets you join in either sys.data_spaces or sys.filegroups. The end result is the query below, which typically executes for me in under 1 second. I've included the join to sys.filegroups and to sys.data_spaces, and I've just commented out the join to sys.data_spaces. As I mentioned above, this shaves a good 10-15 minutes off of my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
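
    The query itself did not survive this excerpt, so what follows is a hedged reconstruction from the description above (my sketch, not the author's original; the table name is hypothetical):

        -- Hedged reconstruction: boundary value, row count and filegroup for
        -- each partition of a (hypothetical) table dbo.BigFactTable.
        SELECT prv.value           AS dayid_boundary    -- from the partition function
             , p.partition_number
             , p.rows
             , fg.name             AS filegroup_name
        FROM sys.partitions p
        JOIN sys.indexes i             ON i.object_id = p.object_id
                                      AND i.index_id  = p.index_id
        JOIN sys.partition_schemes ps  ON ps.data_space_id = i.data_space_id
        LEFT JOIN sys.partition_range_values prv
                                       ON prv.function_id = ps.function_id
                                      AND prv.boundary_id = p.partition_number  -- alignment is an assumption
        JOIN sys.allocation_units au   ON au.container_id = p.hobt_id
                                      AND au.type = 1                           -- IN_ROW_DATA only
        JOIN sys.filegroups fg         ON fg.data_space_id = au.data_space_id
        -- JOIN sys.data_spaces ds     ON ds.data_space_id = au.data_space_id   -- the alternative the author mentions
        WHERE p.object_id = OBJECT_ID('dbo.BigFactTable')
          AND p.index_id  = 1;           -- the clustered index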

    Read the article

  • How to Implement Single Sign-On between Websites

    - by hmloo
    Introduction Single sign-on (SSO) is a way to control access to multiple related but independent systems: a user only needs to log in once and gains access to all the other systems. There are a lot of commercial systems that provide a single sign-on solution, and you can also choose an open source solution like OpenSSO, CAS, etc. Both of those use centralized authentication and provide a more robust authentication mechanism; but if each system has its own authentication mechanism, how do we provide a seamless transition between them? Here I will show you one case. How it Works The method we'll use is based on a secret key shared between the sites. The origin site builds up a hashed authentication token with some other parameters and redirects the user to the target site. All three variables are required and are sent via POST for security reasons: ssoEncode is hash(ssoSharedSecret + "," + ssoTime + "," + ssoUserName); ssoTime is a timestamp in the format YYYYMMDDHHMMSS, used to prevent playback attacks; and ssoUserName is the unique username of the currently logged-in user. Building a Single Sign-On Solution The origin site has functions to: 1. Create the URL for the request. 2. Generate the required authentication parameters. 3. Redirect to the target site. using System; using System.Web.Security; using System.Text; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { string postbackUrl = "http://www.targetsite.com/sso.aspx"; string ssoTime = DateTime.Now.ToString("yyyyMMddHHmmss"); string ssoUserName = User.Identity.Name; string ssoSharedSecret = "58ag;ai76"; // get this from config or similar string ssoHash = FormsAuthentication.HashPasswordForStoringInConfigFile(string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5"); string value = string.Format("{0}:{1},{2}", ssoHash,ssoTime, ssoUserName); Response.Clear(); StringBuilder sb = new StringBuilder(); sb.Append("<html>"); sb.AppendFormat(@"<body onload='document.forms[""form""].submit()'>"); sb.AppendFormat("<form name='form' action='{0}' method='post'>", postbackUrl); sb.AppendFormat("<input type='hidden' name='t' value='{0}'>", value); sb.Append("</form>"); sb.Append("</body>"); sb.Append("</html>"); Response.Write(sb.ToString()); Response.End(); } } The target site has functions to: 1. Get the authentication parameters. 2. Validate the parameters against the shared secret. 3. If the user is valid, authenticate and redirect to the target page. 4. If the user is invalid, show errors and return. 
using System; using System.Web.Security; using System.Text; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { if (User.Identity.IsAuthenticated) { Response.Redirect("~/Default.aspx"); } } if (Request.Params.Get("t") != null) { string ticket = Request.Params.Get("t"); char[] delimiters = new char[] { ':', ',' }; string[] ssoVariable = ticket.Split(delimiters, StringSplitOptions.None); string ssoHash = ssoVariable[0]; string ssoTime = ssoVariable[1]; string ssoUserName = ssoVariable[2]; DateTime appTime = DateTime.MinValue; int offsetTime = 60; // get this from config or similar try { appTime = DateTime.ParseExact(ssoTime, "yyyyMMddHHmmss", null); } catch { //show error return; } if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime) { //show error return; } bool isValid = false; string ssoSharedSecret = "58ag;ai76"; // get this from config or similar string hash = FormsAuthentication.HashPasswordForStoringInConfigFile(string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5"); if (string.Compare(ssoHash, hash, true) == 0) { if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime) { //show error return; } else { isValid = true; } } if (isValid) { //Do authenticate; } else { //show error return; } } else { //show error } } } Summary This is a very simple and basic SSO solution; its main advantage is its simplicity: it only needs a single added page to do the SSO authentication, and it does not require modifying the existing system infrastructure.

    Read the article

  • Meet Matthijs, Dutch Inside Sales Representative for Oracle Direct

    - by Maria Sandu
    Today we would like to share some information around the Dutch Core Technology team in Malaga. Matthijs is one of the team members who decided to relocate from the Netherlands to Malaga to join Oracle Direct two years ago. Matthijs: “For the past two years I have been working as an Oracle Direct Core Technology Inside Sales representative for Named Accounts in the Netherlands, based in Malaga, Spain. In my case, working for the Dutch OD Core Technology team means that I am responsible for the Account Management of Larger companies in the Travel & Transportation and the Manufacturing, Retail & Distribution sector. I work together with the Oracle Field Account Managers and our Field Sales Management in the Netherlands where I am often the main point of contact for customers. This means that I deal with their requests and I manage their various issues, provide solutions and suggestions based on the Oracle Core Technology portfolio. I work on interesting projects with end-customers, making financial proposals and building business cases. It is a very interesting sales environment and for the last two years I improved my skills substantially. This month I will finish my Inside Sales career in Malaga to move to a position within Field Sales in the Netherlands. Oracle Direct has proven to be a great stepping stone for my career. Boost your personal development One of the reasons for joining Oracle was to boost my personal & career development. You can choose from various different trainings to follow all over Europe which enable you to reach both your personal and professional goals. Furthermore, you can decide your own career path and plan the steps necessary to achieve your goal. Many people aim to grow into Field Sales in their native countries, Business Development or Sales Management, but there are many possibilities once you decide to join Oracle. Overall, working at Oracle means working for an international company and one of the worldwide leaders in Enterprise Hardware & Software. Here you get all the tools necessary to develop yourself personally & professionally. Another great advantage of working for Oracle Direct is working from our office in Malaga, Southern Spain where we have over 400 employees from many countries across EMEA. It is a truly international environment! Working and living in Spain gives you an excellent opportunity to learn Spanish and of course enjoy the Spanish lifestyle, cuisine, beaches and much, much more!” Interview day Utrecht If you are inspired by the story of Matthijs and would like to explore the opportunity to join the Technology Sales team for the Dutch market in Malaga, let us know! We will organise an Interview day in the Oracle office in Utrecht on the 18th and 19th of September. We currently have multiple openings in the Core Technology team that focus on selling our Database portfolio in the Dutch market. We are looking for native Dutch speakers with a Bachelors degree, 2-5 years sales experience (ideally in IT) who are willing to relocate to Malaga for at least 2 years! For more information please contact [email protected] or [email protected].

    Read the article

  • Using Recursive SQL and XML trick to PIVOT(OK, concat) a "Document Folder Structure Relationship" table, works like MySQL GROUP_CONCAT

    - by Kevin Shyr
    I'm in the process of building out a Data Warehouse and encountered this issue along the way. In the environment, there is a table that stores all the folders with the individual level. For example, if a document is created here: {App Path}\Level 1\Level 2\Level 3\{document}, then the DocumentFolder table would look like this (ID, ID_Parent, FolderName): (1, NULL, Level 1), (2, 1, Level 2), (3, 2, Level 3). To my understanding, the table was built so that: each proposal can have multiple documents stored at various locations; and different users working on the proposal will have different access levels to the folders; if one user is assigned access to a folder level, she/he can see all the sub folders and their content. Now we understand from an application point of view why this table was built this way. But you can quickly see the pain this causes the report writer who has to show a document link on the report. I wasn't surprised to find the report query had 5 self outer joins, which is at the mercy of nobody creating a document that is buried 6 levels deep, not to mention the degradation in performance. With the help of 2 posts (listed at the end of this entry), I was able to come up with this solution: use recursive SQL to build out the folder path, and use the SQL XML trick to concat the strings. Code (a reminder: I built this code in a stored procedure; if you copy the syntax into a simple query window and execute it, you'll get an incorrect syntax error): -- Get all folders and group them by the original DocumentFolderID in PTSDocument table ;WITH DocFoldersByDocFolderID(PTSDocumentFolderID_Original, PTSDocumentFolderID_Parent, sDocumentFolder, nLevel) AS (-- first member SELECT 'PTSDocumentFolderID_Original' = d1.PTSDocumentFolderID , PTSDocumentFolderID_Parent , 'sDocumentFolder' = sName , 'nLevel' = CONVERT(INT, 1000000) FROM (SELECT DISTINCT PTSDocumentFolderID FROM dbo.PTSDocument_DY WITH(READPAST) ) AS d1 INNER JOIN dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST) ON d1.PTSDocumentFolderID = df1.PTSDocumentFolderID UNION ALL -- recursive SELECT ddf1.PTSDocumentFolderID_Original , df1.PTSDocumentFolderID_Parent , 'sDocumentFolder' = df1.sName , 'nLevel' = ddf1.nLevel - 1 FROM dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST) INNER JOIN DocFoldersByDocFolderID AS ddf1 ON df1.PTSDocumentFolderID = ddf1.PTSDocumentFolderID_Parent) , -- Flatten out folder path DocFolderSingleByDocFolderID(PTSDocumentFolderID_Original, sDocumentFolder) AS (SELECT dfbdf.PTSDocumentFolderID_Original , 'sDocumentFolder' = STUFF((SELECT '\' + sDocumentFolder FROM DocFoldersByDocFolderID WHERE (PTSDocumentFolderID_Original = dfbdf.PTSDocumentFolderID_Original) ORDER BY PTSDocumentFolderID_Original, nLevel FOR XML PATH ('')),1,1,'') FROM DocFoldersByDocFolderID AS dfbdf GROUP BY dfbdf.PTSDocumentFolderID_Original) And voila. I use the second CTE to join back to my original query (which is now a CTE for the source, as we can now use MERGE to do the INSERT and UPDATE at the same time). Each part of this solution would not solve the problem by itself, because: if I don't use recursion, I cannot build out the path properly; if I use the XML trick only, then I don't have the originating folder ID info that I need to link to the document; if I don't use the XML trick, then I don't have one row per document to show in the report (I could conceivably do this in the report function, but I'd rather not deal with the beginning or ending backslash and how to attach the document name); PIVOT doesn't do strings, and UNPIVOT runs into the same problem as the above. I'm excited that each version of SQL Server provides us new tools to solve old problems and/or enables us to solve problems in a more elegant way. The 2 posts that helped me along: Recursive Queries Using Common Table Expression; How to use GROUP BY to concatenate strings in SQL server?
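
    For readers who just want the GROUP_CONCAT-style trick in isolation, here is a minimal generic sketch using the small DocumentFolder example from the start of this entry (a simplification, not the author's production code):

        -- Minimal FOR XML PATH concatenation: one row per parent, child names
        -- joined with '\', much like MySQL's GROUP_CONCAT.
        SELECT f.ID_Parent,
               STUFF((SELECT '\' + c.FolderName
                      FROM dbo.DocumentFolder c
                      WHERE c.ID_Parent = f.ID_Parent
                      ORDER BY c.ID
                      FOR XML PATH('')), 1, 1, '') AS FolderPath
        FROM dbo.DocumentFolder f
        GROUP BY f.ID_Parent;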

    Read the article

  • entity framework 4 POCO's stored procedure error - "The FunctionImport could not be found in the container"

    - by user331884
    Entity Framework with POCO entities generated by a T4 template. I added a Function Import, named it "procFindNumber", and specified a complex collection type named "NumberResult". Here's what got generated in the Context.cs file: public ObjectResult<NumberResult> procFindNumber(string lookupvalue) { ObjectParameter lookupvalueParameter; if (lookupvalue != null) { lookupvalueParameter = new ObjectParameter("lookupvalue", lookupvalue); } else { lookupvalueParameter = new ObjectParameter("lookupvalue", typeof(string)); } return base.ExecuteFunction<NumberResult>("procFindNumber", lookupvalueParameter); } Here's the stored procedure: ALTER PROCEDURE [dbo].[procFindNumber] @lookupvalue varchar(255) AS BEGIN SET NOCOUNT ON; DECLARE @sql nvarchar(MAX); IF @lookupvalue IS NOT NULL AND @lookupvalue <> '' BEGIN SELECT @sql = 'SELECT dbo.HBM_CLIENT.CLIENT_CODE, dbo.HBM_MATTER.MATTER_NAME, dbo.HBM_MATTER.CLIENT_MAT_NAME FROM dbo.HBM_MATTER INNER JOIN dbo.HBM_CLIENT ON dbo.HBM_MATTER.CLIENT_CODE = dbo.HBM_CLIENT.CLIENT_CODE LEFT OUTER JOIN dbo.HBL_CLNT_CAT ON dbo.HBM_CLIENT.CLNT_CAT_CODE = dbo.HBL_CLNT_CAT.CLNT_CAT_CODE LEFT OUTER JOIN dbo.HBL_CLNT_TYPE ON dbo.HBM_CLIENT.CLNT_TYPE_CODE = dbo.HBL_CLNT_TYPE.CLNT_TYPE_CODE WHERE (LTRIM(RTRIM(dbo.HBM_MATTER.CLIENT_CODE)) <> '''')' SELECT @sql = @sql + ' AND (dbo.HBM_MATTER.MATTER_NAME like ''%' + @lookupvalue + '%'')' SELECT @sql = @sql + ' OR (dbo.HBM_MATTER.CLIENT_MAT_NAME like ''%' + @lookupvalue + '%'')' SELECT @sql = @sql + ' ORDER BY dbo.HBM_MATTER.MATTER_NAME' -- Execute the SQL query EXEC sp_executesql @sql END END In my WCF service I try to execute the stored procedure: [WebGet(UriTemplate = "number/{value}/?format={format}")] public IEnumerable<NumberResult> GetNumber(string value, string format) { if (string.Equals("json", format, StringComparison.OrdinalIgnoreCase)) { WebOperationContext.Current.OutgoingResponse.Format = WebMessageFormat.Json; } using (var ctx = new MyEntities()) { ctx.ContextOptions.ProxyCreationEnabled = false; var results = ctx.procFindNumber(value); return results.ToList(); } } The error message says "The FunctionImport ... could not be found in the container ...". What am I doing wrong?

    Read the article

  • Slow query with unexpected index scan

    - by zerkms
    Hello, I have this query: SELECT * FROM sample INNER JOIN test ON sample.sample_number = test.sample_number INNER JOIN result ON test.test_number = result.test_number WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00' The biggest table here is RESULT, which contains 11.1M records; the other 2 tables are about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan (over its PRIMARY KEY, result.result_number, which doesn't actually take part in the query) over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records, it executes in 300ms, and the plan shows an index seek (over the result.test_number index). If I replace * in the SELECT clause with result.test_number (covered by an index), then everything becomes fast in the first case too. This points to disk IO issues, but doesn't explain the change of plan. So, any ideas? UPDATE: sampled_date is in the table sample and covered by an index. Other fields from this query: test.sample_number is covered by an index, and result.test_number too. UPDATE 2: Obviously SQL Server for some reason doesn't want to use the index. I did a small experiment: I removed the INNER JOIN with result, selected all the test.test_number values, and after that did SELECT * FROM RESULT WHERE TEST_NUMBER IN (...) This, of course, works fast. But I cannot see what the difference is, or why the query optimizer chooses such an inappropriate way to select the data in the first case.

    Read the article

  • Django queryset not returning the same values as the generated sql

    - by HRCerqueira
    Hello guys, I have the following queryset: subscribers = User.objects.values('email', 'username').filter( Q(subscription_settings__new_question='i') | Q(subscription_settings__new_question_watched_tags='i', marked_tags__id__in=question.tags.values('id'), tag_selections__reason='good') ).exclude(id=question.author.id) The problem is that when I evaluate the query I get only the values that are filtered by the first Q object (even if I reverse the order of the objects). So let's say that I was expecting users A, B, C and D, where A and B are matched by the first Q object and C and D by the second; the queryset only returns A and B. I used the django-debug-toolbar to see the query that was actually being executed (and then I used a direct print statement like "print subscribers.query.as_sql()" just to be sure), and evaluated the query directly using psql (I'm using Postgres, by the way), and I get the results I expect. Here's the query, btw: SELECT "auth_user"."email", "auth_user"."username" FROM "auth_user" LEFT OUTER JOIN "forum_markedtag" ON ("auth_user"."id" = "forum_markedtag"."user_id") INNER JOIN "forum_defaultsubscriptionsetting" ON ("auth_user"."id" = "forum_defaultsubscriptionsetting"."user_id") WHERE ((("forum_markedtag"."reason" = E'good' AND "forum_defaultsubscriptionsetting"."new_question_watched_tags" = E'i' AND "forum_markedtag"."tag_id" IN (SELECT U0."id" FROM "tag" U0 INNER JOIN "question_tags" U1 ON (U0."id" = U1."tag_id") WHERE U1."question_id" = 64 )) OR "forum_defaultsubscriptionsetting"."new_question" = E'i' ) AND NOT ("auth_user"."id" = 10 )) Thanks in advance EDIT: I tried to break the queryset into two, one that uses the first Q object as the filter and another one with the second Q object, and although the second queryset produces the correct SQL that returns the correct values when evaluated directly, it still returns nothing when evaluated as a Django queryset. Here's the alternative code: subscribers = User.objects.values('email', 'username').filter( subscription_settings__new_question='i').exclude(id=question.author.id) subscribers2 = User.objects.values('email', 'username').filter( subscription_settings__new_question_watched_tags='i', marked_tags__id__in=question.tags.values('id'), tag_selections__reason='good').exclude(id=question.author.id)

    Read the article

  • Rails Fixtures Don't Seem to Support Transitive Associations

    - by Rick Wayne
    So I've got 3 models in my app, connected by HABTM relations: class Member < ActiveRecord::Base has_and_belongs_to_many :groups end class Group < ActiveRecord::Base has_and_belongs_to_many :software_instances end class SoftwareInstance < ActiveRecord::Base end And I have the appropriate join tables, and use foxy fixtures to load them: -- members.yml -- rick: name: Rick groups: cals -- groups.yml -- cals: name: CALS software_instances: delphi -- software_instances.yml -- delphi: name: No, this is not a joke, someone in our group is still maintaining Delphi apps... And I do a rake db:fixtures:load and examine the tables. Yep, the join tables groups_members and groups_software_instances have the appropriate IDs in them, big numbers generated by hashing the fixture names. Yay! And if I run script/console, I can look at rick's groups and see that cals is in there, or at cals's software_instances, and delphi shows up. (Likewise delphi knows that it's in the group cals.) Yay! But if I look at rick.groups[0].software_instances...nada. []. And, oddly enough, rick.groups[0].id is something like 30 or 10, not 398398439. I've tried swapping around where the association is defined in the fixtures, e.g. -- members.yml -- rick: name: rick -- groups.yml -- cals: name: CALS members: rick No error, but same result. (Later: Confirmed, same behavior on MySQL. In fact I get the same thing if I take the foxification out and use ERB to directly create the join tables, so: -- license_groups_software_instances -- cals_delphi: group_id: <%= Fixtures.identify 'cals' %> software_instance_id: <%= Fixtures.identify 'delphi' %>) Before I chuck it all, reject foxy fixtures and all who sail in her, and just hand-code IDs...anybody got a suggestion as to why this is happening? I'm currently working in Rails 2.3.2 with sqlite3; this occurs on both OS X and Ubuntu (and I think with MySQL too). TIA, my compadres.

    Read the article

  • How do I break down MySQL query results into categories, each with a specific number of rows?

    - by Mel
    Hello. Problem: I want to list n games from each genre (order not important). The following MySQL query resides inside a ColdFusion function. It is meant to list all games under a platform (for example, list all PS3 games; list all Xbox 360 games; etc.). The variable for PlatformID is passed through the URL. I have 9 genres, and I would like to list 10 games from each genre. SELECT games.GameID AS GameID, games.GameReleaseDate AS rDate, titles.TitleName AS tName, titles.TitleShortDescription AS sDesc, genres.GenreName AS gName, platforms.PlatformID, platforms.PlatformName AS pName, platforms.PlatformAbbreviation AS pAbbr FROM (((games join titles on((games.TitleID = titles.TitleID))) join genres on((genres.GenreID = games.GenreID))) join platforms on((platforms.PlatformID = games.PlatformID))) WHERE (games.PlatformID = '#ARGUMENTS.PlatformID#') ORDER BY GenreName ASC, GameReleaseDate DESC Once the query results come back I group them in ColdFusion as follows: <cfoutput query="ListGames" group="gName"> (first loop which lists genres) #ListGames.gName# <cfoutput> (nested loop which lists games) #ListGames.tName# </cfoutput> </cfoutput> The problem is that I only want 10 games from each genre to be listed. If I place a "limit" of 50 in the SQL, I will get ~50 games of the same genre (depending on how many games of that genre there are). The second issue is that I don't want the overhead of querying the database for all games when each visitor will only look at a few. What is the correct way to do this? Many thanks!
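
    A common shape for "n rows per group", sketched here with MySQL 8.0 window functions (older MySQL versions need a user-variable ranking trick instead); table and column names come from the query above, and the platform id 1 is a placeholder for the URL parameter:

        SELECT GameID, rDate, tName, gName
        FROM (
            SELECT games.GameID AS GameID,
                   games.GameReleaseDate AS rDate,
                   titles.TitleName AS tName,
                   genres.GenreName AS gName,
                   ROW_NUMBER() OVER (PARTITION BY genres.GenreID
                                      ORDER BY games.GameReleaseDate DESC) AS rn
            FROM games
            JOIN titles ON games.TitleID = titles.TitleID
            JOIN genres ON genres.GenreID = games.GenreID
            WHERE games.PlatformID = 1  -- placeholder: bind the URL parameter here
        ) AS ranked
        WHERE rn <= 10
        ORDER BY gName, rDate DESC;

    This also answers the second concern, since only the ten newest games per genre ever leave the database. Binding the platform id with cfqueryparam rather than interpolating #ARGUMENTS.PlatformID# into the string is worth doing in any case.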

    Read the article

  • T-SQL Right Joins to ALL Entries inc Selected Column

    - by Pace
    Hi Experts, I have the following query, which produces the output below; SELECT TBLUSERS.USERID, TBLUSERS.ADusername, TBLACCESSLEVELS.ACCESSLEVELID, TBLACCESSLEVELS.AccessLevelName FROM TBLACCESSLEVELS INNER JOIN TBLACCESSRIGHTS ON TBLACCESSLEVELS.ACCESSLEVELID = TBLACCESSRIGHTS.ACCESSLEVELID INNER JOIN TBLUSERS ON TBLACCESSRIGHTS.USERID = TBLUSERS.USERID The output is this; 29 administrator 1 AllUsers 29 administrator 2 JobQueue 29 administrator 3 Telephone Directory Admin 29 administrator 4 Jobqueueadmin 29 administrator 5 UserAdmin 29 administrator 6 Product System 27 alan 1 AllUsers 97 andy 1 AllUsers 26 barry 1 AllUsers 26 barry 2 JobQueue 26 barry 3 Telephone Directory Admin 26 barry 4 Jobqueueadmin 26 barry 5 UserAdmin 26 barry 6 Product System 26 barry 7 Newseditor 26 barry 8 GreetingBoard What I would like to do is modify the query so I get all Access Levels regardless of whether there is an entry for that user. What I would also like to do is some sort of EXISTS case so that I get output like the following; 29 administrator 1 AllUsers True 29 administrator 2 JobQueue True 29 administrator 3 Telephone Directory Admin True 29 administrator 4 Jobqueueadmin True 29 administrator 5 UserAdmin True 29 administrator 6 Product System True 29 administrator 7 Newseditor False 29 administrator 8 GreetingBoard False 27 alan 1 AllUsers True 27 alan 2 JobQueue False 27 alan 3 Telephone Directory Admin False 27 alan 4 Jobqueueadmin False 27 alan 5 UserAdmin False 27 alan 6 Product System False 27 alan 7 Newseditor False 27 alan 8 GreetingBoard False 97 andy 1 AllUsers True 97 andy 2 JobQueue False 97 andy 3 Telephone Directory Admin False 97 andy 4 Jobqueueadmin False 97 andy 5 UserAdmin False 97 andy 6 Product System False 97 andy 7 Newseditor False 97 andy 8 GreetingBoard False 26 Barry 1 AllUsers True 26 Barry 2 JobQueue True 26 Barry 3 Telephone Directory Admin True 26 Barry 4 Jobqueueadmin True 26 Barry 5 UserAdmin True 26 Barry 6 Product System True 26 Barry 7 Newseditor True 26 Barry 8 GreetingBoard True So the rules are: ALWAYS show ALL entries from ACCESSLEVELS, and where a row EXISTS in ACCESSRIGHTS produce a True/False to show this. I hope this makes sense, and hopefully you don't need the table definitions, as everything I need to work with is in the original query. I just need a way of manipulating it slightly and getting the join in the right place. Thank you. Pace
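
    The usual shape for this, as a sketch using only the tables from the query above: cross join every user with every access level, left join the rights table, and derive the flag with a CASE over the unmatched side.

        SELECT u.USERID,
               u.ADusername,
               al.ACCESSLEVELID,
               al.AccessLevelName,
               CASE WHEN ar.USERID IS NULL THEN 'False' ELSE 'True' END AS HasAccess
        FROM TBLUSERS u
        CROSS JOIN TBLACCESSLEVELS al
        LEFT JOIN TBLACCESSRIGHTS ar
               ON ar.USERID = u.USERID
              AND ar.ACCESSLEVELID = al.ACCESSLEVELID
        ORDER BY u.ADusername, al.ACCESSLEVELID

    The CROSS JOIN manufactures the full user-by-level grid, and the LEFT JOIN leaves ar.USERID NULL exactly where no rights row exists, which is what the CASE turns into False.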

    Read the article

  • MS SQL SELECT stored procedure according to combobox.selectedvalue

    - by Jay
    Hello, In order to fill a datagridview according to the selectedvalue of a combobox I've tried creating a stored procedure. However, as I'm not 100% sure what I'm doing, it either returns everything within the table or nothing at all, depending on the WHERE clause at the end of my stored procedure. This is what's in my class: Public Function GetAankoopDetails(ByRef DisplayMember As String, ByRef ValueMember As String) As DataTable DisplayMember = "AankoopDetailsID" ValueMember = "AankoopDetailsID" If DS.Tables.Count > 0 Then DS.Tables.Remove(DT) End If DT = DAC.ExecuteDataTable(My.Resources.S_AankoopDetails, _Result, _ DAC.Parameter(Const_AankoopID, AankoopID), _ DAC.Parameter("@ReturnValue", 0)) DS.Tables.Add(DT) Return DT End Function Public Function GetAankoopDetails() As DataTable If DS.Tables.Count > 0 Then DS.Tables.Remove(DT) End If DT = DAC.ExecuteDataTable(My.Resources.S_AankoopDetails, _Result, _ DAC.Parameter(Const_AankoopID, AankoopID), _ DAC.Parameter("@ReturnValue", 0)) DS.Tables.Add(DT) Return DT End Function This is the function in the code behind the form I've written in order to fill the datagridview: Private Sub GridAankoopDetails_Fill() Try Me.Cursor = Cursors.WaitCursor dgvAankoopDetails.DataSource = Nothing _clsAankoopDetails.AankoopDetailsID = cboKeuze.SelectedValue dgvAankoopDetails.DataSource = _clsAankoopDetails.GetAankoopDetails Catch ex As Exception MessageBox.Show("An error occurred while trying to fill the data grid: " & ex.Message, "Oops!", MessageBoxButtons.OK) Finally Me.Cursor = Cursors.Default End Try End Sub And finally, this is my stored procedure (do note that I'm not sure what I'm doing here): USE [Budget] GO /****** Object: StoredProcedure [dbo].[S_AankoopDetails] Script Date: 04/12/2010 03:10:52 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[S_AankoopDetails] ( @AankoopID int, @ReturnValue int output ) AS declare @Value int set @Value =@@rowcount if @Value = 0 begin SELECT dbo.tblAankoopDetails.AankoopDetailsID, dbo.tblAankoopDetails.AankoopID, dbo.tblAankoopDetails.ArtikelID, dbo.tblAankoopDetails.Aantal, dbo.tblAankoopDetails.Prijs, dbo.tblAankoopDetails.Korting, dbo.tblAankoopDetails.SoortKorting, dbo.tblAankoopDetails.UitgavenDeelGroepID FROM dbo.tblAankoopDetails INNER JOIN dbo.tblAankoop ON dbo.tblAankoopDetails.AankoopID = dbo.tblAankoop.AankoopID INNER JOIN dbo.tblArtikel ON dbo.tblAankoopDetails.ArtikelID = dbo.tblArtikel.ArtikelID INNER JOIN dbo.tblUitgavenDeelGroep ON dbo.tblAankoopDetails.UitgavenDeelGroepID = dbo.tblUitgavenDeelGroep.UitgavenDeelGroepID WHERE dbo.tblAankoopDetails.Deleted = 0 and dbo.tblAankoopDetails.AankoopID = @AankoopID ORDER BY AankoopID if @@rowcount >0 begin set @ReturnValue=999 end else begin set @ReturnValue=997 end end if @Value >0 begin --This means that I'm entering a user that already exists. (998) set @ReturnValue=998 end Does anyone know what I need to do to resolve this? Kind regards, Jay
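
    One likely culprit in the procedure: @@ROWCOUNT on the first line reflects whatever statement the session ran last, not anything about this call, so @Value is effectively arbitrary and the SELECT branch may or may not execute. A simplified sketch without the branching, using only names already in the procedure (the three INNER JOINs are dropped here because no columns from them are selected; keep them if they are meant to filter out orphaned rows):

        ALTER PROCEDURE [dbo].[S_AankoopDetails]
        (
            @AankoopID int,
            @ReturnValue int OUTPUT
        )
        AS
        BEGIN
            SELECT d.AankoopDetailsID, d.AankoopID, d.ArtikelID, d.Aantal,
                   d.Prijs, d.Korting, d.SoortKorting, d.UitgavenDeelGroepID
            FROM dbo.tblAankoopDetails d
            WHERE d.Deleted = 0
              AND d.AankoopID = @AankoopID
            ORDER BY d.AankoopID;

            -- 999 = rows found, 997 = none, mirroring the original return codes
            SET @ReturnValue = CASE WHEN @@ROWCOUNT > 0 THEN 999 ELSE 997 END;
        END

    Note also that the VB code assigns cboKeuze.SelectedValue to AankoopDetailsID while the procedure filters on @AankoopID; if the combobox actually holds detail ids rather than purchase ids, the WHERE clause will match nothing.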

    Read the article

  • Displaying Many-To-Many Database relationship in VB.NET 2008 with DataGrid, MS SQL 2008

    - by user337501
    Computer bombed while posting this; couldn't find a duplicate question, but if there is one, forgive me. So, I've run into a wall. And rather than use a ladder to avoid it, I'd like to go through it. I'm setting up what I can best describe as a many-to-many relationship in a database. To exemplify, imagine I have three primary tables: Items, Categories, Sections (never mind the potential redundancy). Then I have another table, Properties. Items, Categories, and Sections can be associated with many properties. A single property can be associated with one, all, or none of the other tables. The best way I can figure to do this is to have join tables make the relationship, i.e. tblItems----(Foreign Key)----tblItems_To_Properties----(Foreign Key)----tblProperties In this example, tblItems simply has an "ItemID" Primary Key. tblItems_To_Properties has its own Primary Key (tblItems_To_PropertiesID), a Foreign Key to the Item (ItemID) and a Foreign Key to the Property (PropertyID). The Properties table simply has its primary key (PropertyID). I hope this example isn't too confusing...if I have to I can find a way to put a diagram up or something. My problem is, I want to display this in a DataGrid using the Master-Detail method (DevExpress GridControl). I use tblItems as a test, and I can see the Items in the parent view, but in the child view I see (understandably) the join table and that is it. My goal is to make it so the Grid ignores the join table and shows the Properties table as the only child. Any help on this method or insight into another solution would be muuuuuuuch appreciated.
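
    One way through the wall that keeps the schema intact: expose a view that pre-joins the association table, so the grid's detail level binds to Item-to-Property rows directly and never sees the join table. The table names below come from the question; PropertyName stands in for whatever descriptive columns tblProperties really has.

        CREATE VIEW vwItemProperties AS
        SELECT ip.ItemID,
               p.PropertyID,
               p.PropertyName   -- hypothetical; substitute the real columns
        FROM tblItems_To_Properties AS ip
        INNER JOIN tblProperties AS p
            ON p.PropertyID = ip.PropertyID;

    Relate vwItemProperties to tblItems on ItemID in the DataSet, and the master-detail grid shows Properties as the only child of Items.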

    Read the article

  • JPA ManyToMany referencing issue

    - by dharga
    I have three tables: AvailableOptions and PlanTypeRef, with a ManyToMany association table called AvailOptionPlanTypeAssoc. The trimmed-down schemas look like this: CREATE TABLE [dbo].[AvailableOptions]( [SourceApplication] [char](8) NOT NULL, [OptionId] [int] IDENTITY(1,1) NOT NULL, ... ) CREATE TABLE [dbo].[AvailOptionPlanTypeAssoc]( [SourceApplication] [char](8) NOT NULL, [OptionId] [int] NOT NULL, [PlanTypeCd] [char](2) NOT NULL, ) CREATE TABLE [dbo].[PlanTypeRef]( [PlanTypeCd] [char](2) NOT NULL, [PlanTypeDesc] [varchar](32) NOT NULL, ) And the Java code looks like this: //AvailableOption.java @ManyToMany(cascade={CascadeType.ALL}, fetch=FetchType.EAGER) @JoinTable( name = "AvailOptionPlanTypeAssoc", joinColumns = { @JoinColumn(name = "OptionId"), @JoinColumn(name="SourceApplication")}, inverseJoinColumns=@JoinColumn(name="PlanTypeCd")) List<PlanType> planTypes = new ArrayList<PlanType>(); //PlanType.java @ManyToMany @JoinTable( name = "AvailOptionPlanTypeAssoc", joinColumns = { @JoinColumn(name = "PlanTypeCd")}, inverseJoinColumns={@JoinColumn(name="OptionId"), @JoinColumn(name="SourceApplication")}) List<AvailableOption> options = new ArrayList<AvailableOption>(); The problem arises when making a select on AvailableOptions: it joins back onto itself. Note the following SQL code from the backtrace; the second inner join should be on PlanTypeRef. SELECT t0.OptionId, t0.SourceApplication, t2.PlanTypeCd, t2.EffectiveDate, t2.PlanTypeDesc, t2.SysLstTrxDtm, t2.SysLstUpdtUserId, t2.TermDate FROM dbo.AvailableOptions t0 INNER JOIN dbo.AvailOptionPlanTypeAssoc t1 ON t0.OptionId = t1.OptionId AND t0.SourceApplication = t1.SourceApplication INNER JOIN dbo.AvailableOptions t2 ON t1.PlanTypeCd = t2.PlanTypeCd WHERE (t0.SourceApplication = ? AND t0.OptionType = ?) ORDER BY t0.OptionId ASC, t0.SourceApplication ASC [params=(String) testApp, (String) junit0]}
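
    For reference, the statement the mapping should produce: identical to the trace except that the second INNER JOIN targets PlanTypeRef instead of aliasing AvailableOptions a second time.

        SELECT t0.OptionId, t0.SourceApplication,
               t2.PlanTypeCd, t2.PlanTypeDesc
        FROM dbo.AvailableOptions t0
        INNER JOIN dbo.AvailOptionPlanTypeAssoc t1
                ON t0.OptionId = t1.OptionId
               AND t0.SourceApplication = t1.SourceApplication
        INNER JOIN dbo.PlanTypeRef t2
                ON t1.PlanTypeCd = t2.PlanTypeCd
        WHERE t0.SourceApplication = ? AND t0.OptionType = ?
        ORDER BY t0.OptionId ASC, t0.SourceApplication ASC

    When a provider self-joins like this, it has usually resolved the inverse join column against the owning entity's table, so the @JoinColumn definitions on the PlanType side (explicit referencedColumnName values in particular) are worth checking first; that diagnosis is inferred from the trace, not something the question confirms.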

    Read the article

  • Jenkins slave jobs failing on "Unexpected termination of channel"

    - by Clark Wright
    I am currently seeing a set of errors across my builds. Is this expected behaviour if you lose Jenkins (say to a box crash, or a kill -9)? Or is there something worse going on (like a bad network connection)? The stack trace and error are: FATAL: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel hudson.remoting.RequestAbortedException: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel at hudson.remoting.Request.call(Request.java:149) at hudson.remoting.Channel.call(Channel.java:681) at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:158) at $Proxy175.join(Unknown Source) at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:861) at hudson.Launcher$ProcStarter.join(Launcher.java:345) at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:82) at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:58) at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19) at hudson.model.AbstractBuild$AbstractRunner.perform(AbstractBuild.java:703) at hudson.model.Build$RunnerImpl.build(Build.java:178) at hudson.model.Build$RunnerImpl.doRun(Build.java:139) at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:473) at hudson.model.Run.run(Run.java:1410) at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46) at hudson.model.ResourceController.execute(ResourceController.java:88) at hudson.model.Executor.run(Executor.java:238) Caused by: hudson.remoting.RequestAbortedException: java.io.IOException: Unexpected termination of the channel at hudson.remoting.Request.abort(Request.java:273) at hudson.remoting.Channel.terminate(Channel.java:732) at hudson.remoting.Channel$ReaderThread.run(Channel.java:1157) Caused by: java.io.IOException: Unexpected termination of the channel at hudson.remoting.Channel$ReaderThread.run(Channel.java:1133) Caused by: java.io.EOFException at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2554) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1297) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:351) at hudson.remoting.Channel$ReaderThread.run(Channel.java:1127)

    Read the article

  • Optimizing code using PIL

    - by freakazo
    Firstly, sorry for the long piece of code pasted below. This is my first time actually having to worry about the performance of an application, so I haven't really ever thought about it before. This piece of code pretty much searches for an image inside another image; it takes 30 seconds to run on my computer. Converting the images to greyscale and other changes shaved off 15 seconds; I need another 15 shaved off. I did read a bunch of pages and looked at examples, but I couldn't find the same problems in my code, so any help would be greatly appreciated. From the looks of it (cProfile), 25 seconds are spent within the Image module and only 5 seconds in my code. from PIL import Image import os, ImageGrab, pdb, time, win32api, win32con import cProfile def GetImage(name): name = name + '.bmp' try: print(os.path.join(os.getcwd(),"Images",name)) image = Image.open(os.path.join(os.getcwd(),"Images",name)) except: print('error opening image;', name) return image def Find(name): image = GetImage(name) imagebbox = image.getbbox() screen = ImageGrab.grab() #screen = Image.open(os.path.join(os.getcwd(),"Images","Untitled.bmp")) YLimit = screen.getbbox()[3] - imagebbox[3] XLimit = screen.getbbox()[2] - imagebbox[2] image = image.convert("L") Screen = screen.convert("L") Screen.load() image.load() #print(XLimit, YLimit) Found = False image = image.getdata() for y in range(0,YLimit): for x in range(0,XLimit): BoxCoordinates = x, y, x+imagebbox[2], y+imagebbox[3] ScreenGrab = screen.crop(BoxCoordinates) ScreenGrab = ScreenGrab.getdata() if image == ScreenGrab: Found = True #print("woop") return x,y if Found == False: return "Not Found" cProfile.run('print(Find("Login"))')

    Read the article

  • Nhibernate/Hibernate, lookup tables and object design

    - by Simon G
    Hi, I've got two tables. Invoice, with columns CustomerID, InvoiceDate, Value, InvoiceTypeID (CustomerID and InvoiceDate make up a composite key), and InvoiceType, with InvoiceTypeID and InvoiceTypeName columns. I know I can create my objects like: public class Invoice { public virtual int CustomerID { get; set; } public virtual DateTime InvoiceDate { get; set; } public virtual decimal Value { get; set; } public virtual InvoiceType InvoiceType { get; set; } } public class InvoiceType { public virtual int InvoiceTypeID { get; set; } public virtual string InvoiceTypeName { get; set; } } So the generated sql would look something like: SELECT CustomerID, InvoiceDate, Value, InvoiceTypeID FROM Invoice WHERE CustomerID = x AND InvoiceDate = y SELECT InvoiceTypeID, InvoiceTypeName FROM InvoiceType WHERE InvoiceTypeID = z But rather than having two select queries executed to retrieve the data, I would rather have one. I would also like to avoid using child objects for simple lookup lists. So my object would look something like: public class Invoice { public virtual int CustomerID { get; set; } public virtual DateTime InvoiceDate { get; set; } public virtual decimal Value { get; set; } public virtual int InvoiceTypeID { get; set; } public virtual string InvoiceTypeName { get; set; } } And my sql would look something like: SELECT CustomerID, InvoiceDate, Value, InvoiceTypeID FROM Invoice INNER JOIN InvoiceType ON Invoice.InvoiceTypeID = InvoiceType.InvoiceTypeID WHERE CustomerID = x AND InvoiceDate = y My question is: how do I create the mapping for this? I've tried using join, but it tried to join using CustomerID and InvoiceDate; am I missing something obvious? Thanks

    Read the article
