SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark
- by Brian
Oracle's SPARC T4-4 server delivered world record performance
with subsecond response time
on the Oracle OLAP Perf Version 2 benchmark using
Oracle Database 11g Release 2
running on Oracle Solaris 11.
The SPARC T4-4 server achieved throughput of
430,000 cube-queries/hour
with an average response time of 0.85 seconds and a
median response time of 0.43 seconds.
This was achieved using only 60% of the available CPU resources,
leaving plenty of headroom for future growth.
The SPARC T4-4 server operated on an Oracle OLAP cube with 4 dimensions,
built from a 4 billion row fact table of sales data.
This represents as many as 90 quintillion aggregate rows
(90 followed by 18 zeros).
Performance Landscape
Oracle OLAP Perf Version 2 Benchmark
4 Billion Fact Table Rows

System        Queries/hour   Users*    Response Time (sec)
                                       Average      Median
SPARC T4-4    430,000        7,300     0.85         0.43

* Users - the supported number of users with a given think time of 60 seconds
Configuration Summary and Results
Hardware Configuration:
SPARC T4-4 server with
4 x SPARC T4 processors, 3.0 GHz
1 TB memory
Data Storage:
1 x Sun Fire X4275 (using COMSTAR)
2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
Redo Storage:
1 x Sun Fire X4275 (using COMSTAR with 8 HDD)
Software Configuration:
Oracle Solaris 11 11/11
Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option
Benchmark Description
The Oracle OLAP Perf Version 2 benchmark is a workload designed to
demonstrate and stress the Oracle OLAP product's core features
of fast query, fast update, and rich calculations on a multi-dimensional
model to support enhanced Data Warehousing.
The bulk of the benchmark entails running a number of concurrent
users, each issuing typical multidimensional queries against an
Oracle OLAP cube consisting of a number of years of sales data with
fully pre-computed aggregations. The cube has four dimensions: time,
product, customer, and channel. Each query user issues approximately
150 different queries. One query chain may ask for total sales in a
particular region (e.g. South America) for a particular time period
(e.g. Q4 of 2010) followed by additional queries which drill
down into sales for individual countries (e.g. Chile, Peru, etc.) with
further queries drilling down into individual stores, etc.
Another query chain may ask for yearly comparisons of total
sales for some product category (e.g. major household appliances) and
then issue further queries drilling down into particular
products (e.g. refrigerators, stoves, etc.), particular regions,
particular customers, etc.
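Purely as an illustration of what such a query chain looks like, the Python
sketch below walks a hypothetical region > country > store hierarchy and
prints the resulting drill-down sequence. The member names and query wording
are invented for this example and are not taken from the benchmark kit.

    # Hypothetical geography hierarchy: region -> country -> store.
    hierarchy = {
        "South America": {
            "Chile": ["Santiago Store 1", "Santiago Store 2"],
            "Peru": ["Lima Store 1"],
        },
    }
    period = "Q4 2010"

    # Build one drill-down chain: region total, then each country, then each store.
    chain = []
    for region, countries in hierarchy.items():
        chain.append(f"total sales for region {region} in {period}")
        for country, stores in countries.items():
            chain.append(f"total sales for country {country} in {period}")
            chain.extend(f"total sales for store {store} in {period}" for store in stores)

    for query in chain:
        print(query)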
Results from version 2 of the benchmark are not comparable with
version 1. The primary differences are the types of queries
and the query mix.
Key Points and Best Practices
Since typical BI users are likely to issue similar queries, with different
constants in the where clauses, setting the init.ora parameter "cursor_sharing"
to "force" provides additional query throughput and supports a larger
number of users. Apart from this setting, together with making full use
of available memory, out-of-the-box performance for the OLAP Perf workload should
provide results similar to what is reported here.
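For reference, this parameter can be applied with a single line in the
instance parameter file; the line below is a minimal sketch (whether a
pfile or an spfile is used, and the scope of the change, depend on the
environment):

    cursor_sharing=FORCE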
For a given number of query users with zero think time,
the main measured
metrics are the average query response time, the median query response
time, and the query throughput. A derived metric is the maximum number
of users the system can support at the measured response time,
assuming some non-zero think time. The calculation of the maximum
number of users follows from the well-known response-time law
N = (rt + tt) * tp
where rt is the average response time, tt is the think time and tp is
the measured throughput.
Setting tt to 60 seconds, rt to 0.85 seconds, and tp to 119.44 queries/sec
(430,000 queries/hour), the above formula shows that the SPARC T4-4 server
will support approximately 7,300 concurrent users at that think time and
average response time.
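As a quick check of that arithmetic, the short Python sketch below (not part
of the benchmark kit; it simply restates the response time law with the
reported numbers) reproduces the derived user count:

    # Response-time law: N = (rt + tt) * tp
    rt = 0.85               # average response time, seconds
    tt = 60.0               # think time, seconds
    tp = 430000 / 3600.0    # throughput in queries/sec, about 119.44

    n_users = (rt + tt) * tp
    print(round(n_users))   # about 7268, reported as 7,300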
For more information, see Chapter 3 of "Quantitative System Performance," cited below.
--
See Also
Quantitative System Performance: Computer System Analysis Using Queueing Network Models,
by Edward D. Lazowska, John Zahorjan, G. Scott Graham, and Kenneth C. Sevcik
Oracle Database 11g – Oracle OLAP
SPARC T4-4 Server
Oracle Solaris
Oracle Database 11g Release 2
Disclosure Statement
Copyright 2012, Oracle and/or its affiliates. All rights reserved.
Oracle and Java are registered trademarks of Oracle and/or its
affiliates. Other names may be trademarks of their respective
owners. Results as of 11/2/2012.