Analytic functions – they’re not aggregates

Posted by Rob Farley on SQL Blog, Tue, 10 Apr 2012

SQL 2012 brings us a bunch of new analytic functions, together with enhancements to the OVER clause. People who have known me over the years will remember that I’m a big fan of the OVER clause and the types of things that it brings us when applied to aggregate functions, as well as the ranking functions that it enables.

The OVER clause was introduced in SQL Server 2005, and remained frustratingly unchanged until SQL Server 2012.

This post, though, is going to look at a particular aspect of the analytic functions (not the enhancements to the OVER clause). When I present on the analytic functions around Australia as part of the SQL Saturday tour (starting in Brisbane this Thursday), and in Chicago next month, I’ll make sure all of that is sufficiently well described. But for this post, I’m going to skip that and assume you get it.

The analytic functions introduced in SQL 2012 seem to come in pairs – FIRST_VALUE and LAST_VALUE, LAG and LEAD, CUME_DIST and PERCENT_RANK, PERCENTILE_CONT and PERCENTILE_DISC. Perhaps frustratingly, they take slightly different forms as well. The ones I want to look at now are FIRST_VALUE and LAST_VALUE, and PERCENTILE_CONT and PERCENTILE_DISC.
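(Just to show what I mean about the slightly different forms, here’s a minimal LAG/LEAD sketch in the OVER-only style – the percentile functions use the WITHIN GROUP form, which you’ll see in the query further down.)

SELECT
    SalesOrderID,
    TotalDue,
    -- LAG and LEAD look at the previous and next rows in the ORDER BY sequence
    LAG(TotalDue)
        OVER (ORDER BY OrderDate, SalesOrderID),
    LEAD(TotalDue)
        OVER (ORDER BY OrderDate, SalesOrderID)
FROM Sales.SalesOrderHeader
;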

The reason I’m pulling these ones out is that they always produce the same result within their partitions (if you’re applying them to the whole partition).

Consider the following query:

SELECT
    YEAR(OrderDate),
    FIRST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID
              RANGE BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING),
    LAST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID
              RANGE BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING),
    PERCENTILE_CONT(0.95)
        WITHIN GROUP (ORDER BY TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)),
    PERCENTILE_DISC(0.95)
        WITHIN GROUP (ORDER BY TotalDue)
        OVER (PARTITION BY YEAR(OrderDate))
FROM Sales.SalesOrderHeader
;

This is designed to get the TotalDue for the first order of the year, the last order of the year, and also the 95th percentile, using both the continuous and discrete methods (‘discrete’ means it picks the closest one from the values available – ‘continuous’ means it will happily use something in between, similar to what you would do for a traditional median of four values). I’m sure you can imagine the results – a different value for each field, but within each year, all the rows the same.
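To make the distinction concrete, here’s a tiny sketch over some made-up values (10, 20, 30 and 40 – entirely hypothetical, nothing to do with AdventureWorks): the discrete median is 20, an actual value from the set, while the continuous median is 25, interpolated between 20 and 30.

SELECT DISTINCT
    PERCENTILE_CONT(0.5)
        WITHIN GROUP (ORDER BY val) OVER (),  -- 25, interpolated between 20 and 30
    PERCENTILE_DISC(0.5)
        WITHIN GROUP (ORDER BY val) OVER ()   -- 20, an actual value from the set
FROM (VALUES (10), (20), (30), (40)) AS t(val)
;

Anyway – back to the bigger query.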

Notice that I’m not grouping by the year. Nor am I filtering. This query gives us a result for every row in the SalesOrderHeader table – 31465 in this case (using the original AdventureWorks that dates back to the SQL 2005 days).

The RANGE BETWEEN bit in FIRST_VALUE and LAST_VALUE is needed to make sure that we’re considering all the rows available. If we don’t specify it, the default of “RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW” applies, which means that LAST_VALUE ends up returning the value from the row we’re looking at.
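Here’s a sketch of what that looks like if you leave the frame out – with SalesOrderID being unique there are no ties in the ordering, so the default frame stops at the current row and LAST_VALUE just echoes the current row’s TotalDue:

SELECT
    SalesOrderID,
    TotalDue,
    LAST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID)
              -- no frame specified, so this just returns the current row’s TotalDue
FROM Sales.SalesOrderHeader
;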

At this point you might think about other environments such as Access or Reporting Services, and remember aggregate functions like FIRST. We really should be able to do something like:

SELECT
    YEAR(OrderDate),
    FIRST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID
              RANGE BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING)
FROM Sales.SalesOrderHeader
GROUP BY YEAR(OrderDate)
;

But you can’t. You get that age-old error:

Msg 8120, Level 16, State 1, Line 5
Column 'Sales.SalesOrderHeader.OrderDate' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
Msg 8120, Level 16, State 1, Line 5
Column 'Sales.SalesOrderHeader.SalesOrderID' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

Hmm.

You see, FIRST_VALUE isn’t an aggregate function. None of these analytic functions are. There are too many things involved for SQL to realise that the values produced might be identical within the group.

Furthermore, you can’t even surround it in a MAX. Then you get a different error, telling you that you can’t use windowed functions in the context of an aggregate.
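In case you’re curious, this is the kind of thing I mean – just a sketch of the attempt, which gets rejected along those lines:

SELECT
    YEAR(OrderDate),
    -- wrapping the windowed function in an aggregate doesn’t work either
    MAX(FIRST_VALUE(TotalDue)
            OVER (PARTITION BY YEAR(OrderDate)
                  ORDER BY OrderDate, SalesOrderID
                  RANGE BETWEEN UNBOUNDED PRECEDING
                            AND UNBOUNDED FOLLOWING))
FROM Sales.SalesOrderHeader
GROUP BY YEAR(OrderDate)
;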

And so we end up doing our ‘grouping’ with a DISTINCT instead.

SELECT DISTINCT
    YEAR(OrderDate),
    FIRST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID
              RANGE BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING),
    LAST_VALUE(TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)
              ORDER BY OrderDate, SalesOrderID
              RANGE BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING),
    PERCENTILE_CONT(0.95) 
        WITHIN GROUP (ORDER BY TotalDue)
        OVER (PARTITION BY YEAR(OrderDate)),
    PERCENTILE_DISC(0.95)
        WITHIN GROUP (ORDER BY TotalDue)
        OVER (PARTITION BY YEAR(OrderDate))
FROM Sales.SalesOrderHeader
;


I’m sorry. It’s just the way it goes. Hopefully it’ll change in the future, but for now, it’s what you’ll have to do.

If we look in the execution plan, we see that it’s incredibly ugly, and actually works out the results of these analytic functions for all 31465 rows, finally performing the distinct operation to convert it into the four rows we get in the results.


You might be able to achieve a better plan using things like TOP, or the kind of calculation that I used in http://sqlblog.com/blogs/rob_farley/archive/2011/08/23/t-sql-thoughts-about-the-95th-percentile.aspx (which is how PERCENTILE_CONT works), but it’s definitely convenient to use these functions, and in time, I’m sure we’ll see good improvements in the way that they are implemented.
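To give a rough idea of the TOP approach, here’s one possible shape (a sketch only – whether it actually gets you a better plan depends on your indexes, and the YEAR() predicate as written isn’t particularly index-friendly):

SELECT
    y.OrderYear,
    firstord.TotalDue AS FirstTotalDue,
    lastord.TotalDue AS LastTotalDue
FROM (SELECT DISTINCT YEAR(OrderDate) AS OrderYear
      FROM Sales.SalesOrderHeader) AS y
CROSS APPLY (SELECT TOP (1) s.TotalDue      -- first order of the year
             FROM Sales.SalesOrderHeader AS s
             WHERE YEAR(s.OrderDate) = y.OrderYear
             ORDER BY s.OrderDate, s.SalesOrderID) AS firstord
CROSS APPLY (SELECT TOP (1) s.TotalDue      -- last order of the year
             FROM Sales.SalesOrderHeader AS s
             WHERE YEAR(s.OrderDate) = y.OrderYear
             ORDER BY s.OrderDate DESC, s.SalesOrderID DESC) AS lastord
;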

Oh, and this post should be good for fellow SQL Server MVP Nigel Sammy’s T-SQL Tuesday this month.
