Investigating on xVelocity (VertiPaq) column size
Posted by Marco Russo (SQLBI) on SQL Blog, Mon, 19 Mar 2012 11:15:00 GMT
In January I published an article about how to optimize high-cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but when we talk about Analysis Services, xVelocity and VertiPaq mean the same thing. In this post I’ll show how to investigate the column sizes of an existing Tabular database, so that you can find the most important columns to optimize.
A first approach is to look in the DataDir of Analysis Services for the folder containing the database, and then for the biggest files in all of its subfolders: the name of the biggest file contains the name of the most expensive column. However, this heuristic process is not very practical.
A better approach is to use a DMV that provides the exact information. For example, with the following query (open SSMS, open an MDX query window on the database you are interested in, and execute it) you will see all database objects sorted by used size in descending order.
SELECT *
FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
ORDER BY used_size DESC
Look at the first rows to understand which are the most expensive columns in your tabular model. The interesting data provided are:
- TABLE_ID: the name of the object; it can also be a dictionary or an index
- COLUMN_ID: the name of the column the object belongs to; you may also see ID_TO_POS and POS_TO_ID values, which refer to internal indexes
- RECORDS_COUNT: the number of rows in the column
- USED_SIZE: the memory used by the object
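The query above returns one row per column segment. If you prefer a single row per column, showing the size of its dictionary, you can also query the DISCOVER_STORAGE_TABLE_COLUMNS DMV; the following query is a minimal sketch of that idea (it is not part of the workflow described above) and relies on the documented DICTIONARY_SIZE column of that DMV.
-- Dictionary size per column (one row per column instead of one per segment)
SELECT *
FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMNS
ORDER BY DICTIONARY_SIZE DESC
Large dictionaries usually correspond to high-cardinality columns, which are the same candidates for the optimizations listed below.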
By looking at the ratio between USED_SIZE and RECORDS_COUNT, you can understand what you can do to optimize your tabular model. Your options are:
- Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
- Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating-point number but two decimals are good enough (e.g. a temperature), round the number to the precision that is relevant to you (see the sketch after this list).
- Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article.
- Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don’t have too many distinct values, and you will probably just move the problem to another column. Sorting data starting from the lower-density columns (those with a small number of distinct values) and moving on to higher-density columns (those with high cardinality) is the technique that provides the best compression ratio.
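As an illustration of the last three options, here is a minimal T-SQL sketch of a source query for a partition; the table and column names (Fact.Sales, EventTime, Temperature, TransactionID, StoreKey) are hypothetical and only show the pattern, not an example from the original article.
-- Hypothetical source query: reduce granularity, split a high-cardinality key, sort by a low-cardinality column
SELECT
    CONVERT(DATETIME2(0), EventTime) AS EventTime,  -- round the datetime to the nearest second (drop milliseconds)
    ROUND(Temperature, 2) AS Temperature,           -- keep only two decimals if that precision is enough
    TransactionID / 10000 AS TransactionIDHigh,     -- split a large integer key into two
    TransactionID % 10000 AS TransactionIDLow,      -- lower-cardinality parts
    StoreKey
FROM Fact.Sales
ORDER BY StoreKey                                   -- sort by a low-density column to help compression
In the tabular model you can then recombine the two parts of the split key only when you really need the original value.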
After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before.
If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.