Developing Schema Compare for Oracle (Part 6): 9i Query Performance

Posted by Simon Cooper on Simple Talk
Published on Thu, 03 Jun 2010 10:30:00 GMT

Throughout the EAP and beta versions of Schema Compare for Oracle, the most common request we received was support for Oracle 9i. After releasing version 1.0 with support for 10g and 11g, our next step was to get version 1.1 of SCfO out with support for 9i. However, there were some significant problems to overcome first. This post will concentrate on query execution time.

When we first tested SCfO on a 9i server, after accounting for various changes to the data dictionary, we found that database registration was taking a long time. And I mean a looooooong time. The same database that on 10g or 11g would take a couple of minutes to register would be taking upwards of 30 mins on 9i. Obviously, this is not ideal, so a poke around the query execution plans was required.

As an example, let's take the table population query - the one that reads ALL_TABLES and joins it with a few other dictionary views to get us back our list of tables. On 10g, this query takes 5.6 seconds. On 9i, it takes 89.47 seconds. The difference in execution plan is even more dramatic - here's the (edited) execution plan on 10g:

-----------------------------------------------------------------------
| Id  | Operation             | Name                   | Bytes | Cost |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                        |  108K |  939 |
|   1 | SORT ORDER BY         |                        |  108K |  939 |
|   2 | NESTED LOOPS OUTER    |                        |  108K |  938 |
|*  3 | HASH JOIN RIGHT OUTER |                        |  103K |  762 |
|   4 | VIEW                  | ALL_EXTERNAL_LOCATIONS |  2058 |    3 |
|* 20 | HASH JOIN RIGHT OUTER |                        | 73472 |  759 |
|  21 | VIEW                  | ALL_EXTERNAL_TABLES    |  2097 |    3 |
|* 34 | HASH JOIN RIGHT OUTER |                        | 39920 |  755 |
|  35 | VIEW                  | ALL_MVIEWS             |    51 |    7 |
|  58 | NESTED LOOPS OUTER    |                        | 39104 |  748 |
|  59 | VIEW                  | ALL_TABLES             |  6704 |  668 |
|  89 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2025 |    5 |
| 106 | VIEW                  | ALL_PART_TABLES        |   277 |   11 |
-----------------------------------------------------------------------

And the same query on 9i:

-----------------------------------------------------------------------
| Id  | Operation             | Name                   | Bytes | Cost |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                        |   16P |  55G |
|   1 | SORT ORDER BY         |                        |   16P |  55G |
|   2 | NESTED LOOPS OUTER    |                        |   16P | 862M |
|   3 | NESTED LOOPS OUTER    |                        | 5251G | 992K |
|   4 | NESTED LOOPS OUTER    |                        | 4243M | 2578 |
|   5 | NESTED LOOPS OUTER    |                        | 2669K | 1440 |
|*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
|   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
|  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
|* 50 | VIEW PUSHED PREDICATE | ALL_TAB_COMMENTS       |  2043 |      |
|* 66 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_TABLES    | 1777K |      |
|* 80 | VIEW PUSHED PREDICATE | ALL_EXTERNAL_LOCATIONS | 1744K |      |
|* 96 | VIEW                  | ALL_PART_TABLES        |  852K |      |
-----------------------------------------------------------------------

Have a look at the cost column. The overall query cost on 10g is 939; on 9i it's 55,000,000,000 (or, more precisely, 55,496,472,769). The 9i plan is also having to process far more data. What on earth could be causing this huge difference in query cost?

After trawling through the '10g New Features' documentation, we found item 1.9.2.21. Before 10g, Oracle advised that you do not collect statistics on data dictionary objects; from 10g onwards, the advice is to collect statistics on the data dictionary. With those statistics in place, Oracle knows what sort of data is in the dictionary tables, and so can generate an efficient execution plan for our queries. On 9i, no statistics are present on the system tables, so Oracle falls back to the Rule Based Optimizer, which turns most of our LEFT JOINs into nested loops. If we force 9i to use hash joins, as 10g does, we get a much better plan:

-----------------------------------------------------------------------
| Id  | Operation             | Name                   | Bytes | Cost |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                        | 7587K | 3704 |
|   1 | SORT ORDER BY         |                        | 7587K | 3704 |
|*  2 | HASH JOIN OUTER       |                        | 7587K |  822 |
|*  3 | HASH JOIN OUTER       |                        | 5262K |  616 |
|*  4 | HASH JOIN OUTER       |                        | 2980K |  465 |
|*  5 | HASH JOIN OUTER       |                        |  710K |  432 |
|*  6 | HASH JOIN OUTER       |                        |  398K |  302 |
|   7 | VIEW                  | ALL_TABLES             |  342K |  276 |
|  29 | VIEW                  | ALL_MVIEWS             |    51 |   20 |
|  50 | VIEW                  | ALL_PART_TABLES        |  852K |  104 |
|  78 | VIEW                  | ALL_TAB_COMMENTS       |  2043 |   14 |
|  93 | VIEW                  | ALL_EXTERNAL_LOCATIONS | 1744K |   31 |
| 106 | VIEW                  | ALL_EXTERNAL_TABLES    | 1777K |   28 |
-----------------------------------------------------------------------

That's much more like it. This drops the execution time down to 24 seconds. Not as good as 10g, but still an improvement. There are still several problems with this, however.

10g introduced a new join method - the right outer hash join (used in the first execution plan above). The 9i query optimizer doesn't have this option available, so forcing a hash join means it has to hash the ALL_TABLES row set, and then re-hash the growing intermediate result for every subsequent hash join in the plan; that could be thousands and thousands of rows being hashed over and over again.
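
To illustrate why that matters, here's a purely illustrative sketch in Python (not Oracle internals - rows are reduced to (owner, name, value) tuples, and we assume at most one matching row per key, which holds for these dictionary views) of the two strategies:

# Purely illustrative: 'big' stands in for the preserved ALL_TABLES row set,
# 'small' for an optional subsidiary view such as ALL_PART_TABLES.

def hash_join_build_on_big(big, small):
    # Roughly what a forced hash join does on 9i: the preserved (big) side
    # is hashed, and the optional (small) side probes it. The intermediate
    # result then has to be re-hashed again for the next join in the chain.
    built = {(owner, name): [owner, name, value, None]
             for owner, name, value in big}
    for owner, name, value in small:
        if (owner, name) in built:
            built[(owner, name)][3] = value
    return [tuple(row) for row in built.values()]

def hash_join_build_on_small(big, small):
    # Roughly what 10g's HASH JOIN RIGHT OUTER does: the optional (small)
    # side is hashed once, and the big preserved side simply streams past it.
    built = {(owner, name): value for owner, name, value in small}
    return [(owner, name, value, built.get((owner, name)))
            for owner, name, value in big]

Both produce the same rows; the difference is which input has to be materialised and hashed - and with a forced outer hash join on 9i that is always the preserved side, which in our queries is the big one.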

And although forcing hash joins somewhat alleviates this problem on our test systems, there's no guarantee that this will improve the execution time on customers' systems; it may even increase the time it takes (say, if all their tables are partitioned, or they've got a lot of materialized views). Ideally, we would want a solution that provides a speedup whatever the input.

To try and get some ideas, we asked some Oracle performance specialists for their tips. Their recommendation was to add a hidden hook into the product that allowed users to specify their own query hints, or even rewrite the queries entirely. However, we preferred not to take that approach; as well as requiring a lot of new infrastructure and a rewrite of the population code, it would have meant that any users of 9i would have to spend time optimizing the queries for their own system before they could use the product. Another approach was needed.

All our population queries follow a very specific pattern - a base view provides most of the information we need (ALL_TABLES for tables, ALL_TAB_COLS for columns) and we left join to subsidiary views that fill in the gaps (for instance, ALL_PART_TABLES for partition information). All the left joins use the same set of columns to join on (typically the object owner and name), so we could re-use the hash information for each join, rather than re-hashing the same columns for every join.

To allow us to do this, along with various other performance improvements possible for this specific query pattern, we read each of the views individually and do the hash joins on the client. Fortunately, this 'pure' algorithmic problem is the kind that can be optimized very well for the real-world situations we expect; as well as storing on disk any row data that isn't used in the hash key, we use very specific, memory-efficient data structures to hold the information we need. This lets us achieve a database population time that is as fast as on 10g - and in some situations slightly faster - with a memory overhead of roughly 150 bytes per row of data in the result set (for a schema with 10,000 tables, that means an extra ~1.4MB of memory used during population).
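
As a rough sketch of the idea - this is not the actual Schema Compare code; the query text, column choices and data structures are illustrative only, and the real implementation uses far more compact structures than Python tuples and dicts - the client-side population looks something like this:

# Each dictionary view is read with its own simple, un-joined query, and the
# left joins are then done on the client. The join key for every view is
# (owner, table_name).

BASE_QUERY = "SELECT owner, table_name, tablespace_name FROM ALL_TABLES"
SUBSIDIARY_QUERIES = [
    # Each sketch query contributes exactly one extra column.
    "SELECT owner, table_name, partitioning_type FROM ALL_PART_TABLES",
    "SELECT owner, table_name, comments FROM ALL_TAB_COMMENTS",
]

def populate_tables(fetch):
    # fetch(sql) is assumed to return an iterable of tuples whose first two
    # columns are the owner and object name - the shared join key.
    # Build one small hash table per subsidiary view, keyed on (owner, name).
    lookups = [{(owner, name): rest
                for owner, name, *rest in fetch(sql)}
               for sql in SUBSIDIARY_QUERIES]

    # Stream the base view once; the key is built once per row and re-used
    # to probe every lookup, instead of being re-hashed for every join.
    results = []
    for owner, name, *base_rest in fetch(BASE_QUERY):
        key = (owner, name)
        row = [owner, name, *base_rest]
        for lookup in lookups:
            row.extend(lookup.get(key, [None]))   # left-join: missing -> NULL
        results.append(tuple(row))
    return results

# Tiny self-contained demo with canned data instead of a real 9i connection:
if __name__ == "__main__":
    canned = {
        BASE_QUERY: [("HR", "EMPLOYEES", "USERS"), ("HR", "JOBS", "USERS")],
        SUBSIDIARY_QUERIES[0]: [("HR", "EMPLOYEES", "RANGE")],
        SUBSIDIARY_QUERIES[1]: [("HR", "JOBS", "Job definitions")],
    }
    print(populate_tables(lambda sql: canned[sql]))

The point of the sketch is the shape of the algorithm: one small lookup per subsidiary view, built once, and a single pass over the base view that re-uses the same (owner, name) key for every probe.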

Next: fun with the 9i dictionary views.
