SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP
In SQL Server 2014, Microsoft introduced a new database engine component called In-Memory OLTP, also known as project “Hekaton”, which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads that access memory-resident data. In-Memory OLTP lets us create memory-optimized tables, which in turn offer a significant performance improvement for a typical OLTP workload.
The main objective of a memory-optimized table is to let highly transactional tables live in memory and stay there, without losing a single record. The most significant part is that it still supports the majority of our Transact-SQL statements. Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. The engine is designed for higher concurrency and minimal blocking: In-Memory OLTP alleviates locking issues by using a new type of multi-version optimistic concurrency control, and it substantially reduces waiting on log writes by generating far less log data and needing fewer log writes.
Points to remember
- Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
- Disk-based tables refer to the normal tables we have always created in SQL Server. These tables use fixed-size 8 KB pages that are read from and written to disk as a unit.
- Natively compiled stored procedures refer to a new object type supported by the In-Memory OLTP engine; they are compiled to machine code, which can further improve data access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they cannot reference any disk-based table (see the sketch after this list).
- Interpreted Transact-SQL stored procedures are what SQL Server has always used.
- Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
- Interop refers to interpreted Transact-SQL that references memory-optimized tables.
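To make that last set of terms concrete, here is a minimal sketch of a natively compiled stored procedure. The procedure name and parameters are illustrative (my own choices), and it assumes the dbo.Mem_Table memory-optimized table created later in this article already exists:
-- Minimal natively compiled stored procedure sketch (SQL Server 2014 syntax)
-- Assumes the memory-optimized table dbo.Mem_Table from the example later in this article.
CREATE PROCEDURE dbo.usp_InsertMemRow
    @Name VARCHAR(32),
    @City VARCHAR(32),
    @LastModified DATETIME
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Only memory-optimized tables can be referenced here; disk-based tables are not allowed.
    INSERT INTO dbo.Mem_Table ([Name], [City], [State_Province], [LastModified])
    VALUES (@Name, @City, NULL, @LastModified);
END;
GO
The point to notice is the combination of NATIVE_COMPILATION, SCHEMABINDING and EXECUTE AS together with the BEGIN ATOMIC block, all of which are required for a natively compiled procedure.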
Using In-Memory OLTP
The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.
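If you are unsure whether your installation supports it, the following quick check should (to the best of my knowledge) return 1 on a supported 64-bit instance:
-- Should return 1 when the instance supports In-Memory OLTP, 0 otherwise
SELECT SERVERPROPERTY('IsXTPSupported') AS IsInMemoryOLTPSupported;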
Creating Databases
Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:
CREATE DATABASE InMemoryDB
ON
PRIMARY(NAME = [InMemoryDB_data],
FILENAME = 'D:\data\InMemoryDB_data.mdf', SIZE = 500MB),
FILEGROUP [InMemoryDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA
(NAME = [InMemoryDB_mod_dir1],
FILENAME = 'S:\data\InMemoryDB_mod_dir'),
(NAME = [InMemoryDB_mod_dir2],
FILENAME = 'R:\data\InMemoryDB_mod_dir')
LOG ON (NAME = [InMemoryDB_log], FILENAME = 'L:\log\InMemoryDB_log.ldf', SIZE = 500MB)
COLLATE Latin1_General_100_BIN2;
The above example creates files on three different drives (D:, S:, and R:) for the data file and the in-memory storage containers (the log goes to L:), so if you would like to run this code, change the drive and folder locations to suit your environment. Also notice that a Windows (non-SQL) binary collation was specified: a BIN2 collation is the only collation supported at this point for indexes on memory-optimized tables.
It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same.
ALTER DATABASE AdventureWorks2012 ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE AdventureWorks2012 ADD FILE (NAME='hekaton_mod', FILENAME='S:\data\hekaton_mod') TO FILEGROUP hekaton_mod;
GO
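Once the filegroup and its container have been added, you can verify them from the catalog views. The sketch below shows what I would expect to see on SQL Server 2014, where the type value 'FX' denotes a memory-optimized data filegroup:
USE AdventureWorks2012
GO
-- The memory-optimized filegroup should appear with type_desc = 'MEMORY_OPTIMIZED_DATA_FILEGROUP'
SELECT name, type, type_desc FROM sys.filegroups WHERE type = 'FX';
-- The container we added shows up alongside the regular data and log files
SELECT name, physical_name, type_desc FROM sys.database_files;
GO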
Creating Tables
There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table must use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.
DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY)
A memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised: after a service restart the schema of such a table survives but its rows do not. DURABILITY = SCHEMA_AND_DATA ensures that the data is persisted along with the schema.
Indexing Memory Optimized Table
A memory-optimized table created with DURABILITY = SCHEMA_AND_DATA must always have at least one index, and this requirement can be satisfied by declaring a PRIMARY KEY constraint when the table is created. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified.
CREATE TABLE Mem_Table
(
[Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
[City] VARCHAR(32) NULL,
[State_Province] VARCHAR(32) NULL,
[LastModified] DATETIME NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
As you can see in the above example, we used the clause MEMORY_OPTIMIZED = ON to make sure the table is created as a memory-optimized table rather than a normal table, and we used DURABILITY = SCHEMA_AND_DATA so that the data is persisted along with the metadata. You can also notice that the table declares a PRIMARY KEY up front, which is mandatory for memory-optimized tables.
We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage for memory-optimized tables. So stay tuned for that as well.
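Until then, if you are curious how well a chosen BUCKET_COUNT fits your data, you can peek at the hash index DMV after loading the table. This is a sketch of the kind of query I would use: many empty buckets suggest the count is too high, while long chains suggest it is too low.
-- Hash index health check: bucket usage and chain lengths per memory-optimized table
SELECT OBJECT_NAME(his.object_id) AS table_name,
       his.total_bucket_count,
       his.empty_bucket_count,
       his.avg_chain_length,
       his.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS his;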
Now that we have covered the basics of memory-optimized tables and the key things to remember while using them, let's explore some examples to understand the performance gains they can provide.
I will be using the database created earlier in this article, InMemoryDB, in the demo exercise below.
USE InMemoryDB
GO
-- Creating a disk based table
CREATE TABLE dbo.Disktable
(
Id INT IDENTITY,
Name CHAR(40)
)
GO
CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
GO
-- Creating a memory optimized table with similar structure and DURABILITY = SCHEMA_AND_DATA
CREATE TABLE dbo.Memorytable_durable
(
Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
-- Creating another memory optimized table with similar structure but DURABILITY = SCHEMA_ONLY
CREATE TABLE dbo.Memorytable_nondurable
(
Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
Name CHAR(40)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
-- Now insert 100000 records in dbo.Disktable and observe the Time Taken
DECLARE @i_t bigint
SET @i_t =1
WHILE @i_t<= 100000
BEGIN
INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR,@i_t))
SET @i_t+=1
END
GO
-- Do the same inserts for the memory optimized table dbo.Memorytable_durable and observe the Time Taken
DECLARE @i_t bigint
SET @i_t =1
WHILE @i_t<= 100000
BEGIN
INSERT INTO dbo.Memorytable_durable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t))
SET @i_t+=1
END
GO
-- Now finally do the same inserts for the memory optimized table dbo.Memorytable_nondurable and observe the Time Taken
DECLARE @i_t bigint
SET @i_t =1
WHILE @i_t<= 100000
BEGIN
INSERT INTO dbo.Memorytable_nondurable VALUES(@i_t, 'sachin' + CONVERT(VARCHAR,@i_t))
SET @i_t+=1
END
The above three inserts took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100,000 records on my machine with 8 GB RAM.
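If you would like to reproduce these timings on your own machine, one simple approach is to capture the start and end time around each loop, as in the sketch below for the disk-based table (the variable names are mine; repeat the same pattern for the two memory-optimized tables):
-- Measure elapsed time for the disk-based insert loop
DECLARE @start DATETIME2 = SYSDATETIME();
DECLARE @i_t BIGINT = 1;
WHILE @i_t <= 100000
BEGIN
    INSERT INTO dbo.Disktable(Name) VALUES('sachin' + CONVERT(VARCHAR, @i_t));
    SET @i_t += 1;
END
SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;
GO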
This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with DURABILITY = SCHEMA_ONLY is even faster, since it does not persist its data to disk at all. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates.
Now I leave the decision on using memory-optimized tables to you. I hope you liked this article and that it helped you understand the fundamentals of In-Memory OLTP.
Reference: Pinal Dave (http://blog.sqlauthority.com)