Help: SQL Server

Sharing my knowledge about SQL Server troubleshooting skills


A-Z of In-Memory OLTP : Durable vs. Non-durable tables

Posted by blakhani on January 28, 2014


Can you think of a situation where you don’t want to log anything in SQL Server and you are OK even if data is lost? What comes to your mind? Staging tables used in loading phase of Data warehouses? ASP Session state? Temporary logging tables?

If you have such a situation where the data is temporary and you are OK with losing it on a SQL Server restart, then you can consider non-durable tables in SQL Server In-Memory OLTP. By default, tables are durable, as we briefly discussed earlier in this series. Durability is defined by the DURABILITY parameter in the CREATE TABLE syntax, which indicates whether we want the data in the table to be persisted when the database is restarted (via offline/online, a SQL service restart, or a SQL crash). Non-durable tables have SCHEMA_ONLY as the value passed to the DURABILITY parameter. Note that these tables are different from the temporary tables we create in tempdb (#table and ##table). One major difference is that tempdb tables are not memory-optimized, and neither their data nor their schema survives a restart.

We have already discussed “How In-Memory Table is durable” in this blog. Since no transaction logging is done for a non-durable table, data insertion is faster (actually much.. much faster) than for a durable table, of course at the cost of data durability. Let’s have a look at them in action. We will create two tables with the same schema, one durable (SCHEMA_AND_DATA durability) and one non-durable (SCHEMA_ONLY durability).

-- Create database with IMO Filegroup, If exists drop it.
Use Master
go
If db_id('SQLSeverHelp_IMO') is NOT NULL
drop database SQLSeverHelp_IMO
GO
CREATE DATABASE SQLSeverHelp_IMO 
    ON PRIMARY (
    NAME = [SQLSeverHelp_IMO_data]
    ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO.mdf'
    )
    
    ,FILEGROUP [SQLSeverHelp_IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA ( 
    NAME = [SQLSeverHelp_IMO_dir]
    ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO_dir'
    ) 
    
    LOG ON (
    NAME = [SQLSeverHelp_IMO_log]
    ,Filename = 'C:\IMO_Database\SQLSeverHelp_IMO_log.ldf'
    ) 
GO


-- In-Memory table, Durable
Use SQLSeverHelp_IMO
GO
CREATE TABLE DurableInMemoryOptimizedTable
(
iID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
vFName VARCHAR(20) NOT NULL,
vLName VARCHAR(20) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

-- In-Memory table, non-durable
Use SQLSeverHelp_IMO
GO
CREATE TABLE NonDurableInMemoryOptimizedTable
(
iID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
vFName VARCHAR(20) NOT NULL,
vLName VARCHAR(20) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO

Since both tables are memory-optimized, each has its own DLL generated. We have discussed DLLs for tables in an earlier blog here.

[Screenshot: both tables have their own DLLs loaded]
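As a quick check, the generated modules can be listed via the sys.dm_os_loaded_modules DMV; a sketch, assuming the modules report the description 'XTP Native DLL' as they do in SQL Server 2014:

```sql
-- List natively compiled DLLs loaded for memory-optimized tables.
-- Each of the two tables created above should contribute one module.
SELECT name, description
FROM   sys.dm_os_loaded_modules
WHERE  description = 'XTP Native DLL'
GO
```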

Let’s load the same amount of data into both tables. I have picked 90000 rows (a multiple of three, as I have to insert three important names per iteration). First, I am inserting into the durable table using the script below.

-- Inserting 90000 rows into Durable table.
set nocount on
go
DECLARE @counter INT 
SET @counter = 1

WHILE @counter <= 90000 
  BEGIN 
      INSERT INTO dbo.DurableInMemoryOptimizedTable 
      VALUES      (@counter, 'SQLServer-Help.com','Balmukund Lakhani'),
                  (@counter+1, 'ExtremeExperts.com','Vinod Kumar M'),
                  (@counter+2, 'SQLAuthority.com','Pinal Dave') 

      SET @counter = @counter + 3
  END

[Screenshot: the durable-table insert took around 30 seconds]

As we can see, it took around 30 seconds on my VM. And now.. the non-durable table, using the script below.

-- Inserting 90000 rows into non-durable table.
set nocount on
go
DECLARE @counter INT 
SET @counter = 1

WHILE @counter <= 90000 
  BEGIN 
      INSERT INTO dbo.NonDurableInMemoryOptimizedTable 
      VALUES      (@counter, 'SQLServer-Help.com','Balmukund Lakhani'),
                  (@counter+1, 'ExtremeExperts.com','Vinod Kumar M'),
                  (@counter+2, 'SQLAuthority.com','Pinal Dave') 

      SET @counter = @counter + 3
  END

[Screenshot: the non-durable-table insert completed almost instantaneously]

How much time do you see in the above screenshot? Zero? Trust me, it was almost instantaneous! You don’t trust me? Let’s count the rows in both tables.
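If you want to measure the two loads yourself rather than relying on the status bar, here is a minimal timing sketch (wrap it around either WHILE loop above, keeping everything in the same batch since variables don’t survive a GO):

```sql
-- Capture the start time, run the insert loop, then report elapsed time.
DECLARE @start DATETIME2 = SYSDATETIME();

-- ... run one of the insert loops shown above here ...

SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS elapsed_ms;
```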

-- Verify data is present in both tables.
select COUNT(*) from NonDurableInMemoryOptimizedTable
go
select COUNT(*) from DurableInMemoryOptimizedTable
go

[Screenshot: both tables report 90000 rows]

Still don’t trust me? Try yourself!

Now, the test of survival! I will take the database offline and bring it back online. For the SCHEMA_ONLY table, the data should be cleared but the schema should remain. Let’s test.

-- take database offline and bring online.
Use Master
go
ALTER DATABASE SQLSeverHelp_IMO SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
ALTER DATABASE SQLSeverHelp_IMO SET ONLINE
GO
-- count the rows again
USE SQLSeverHelp_IMO
go
select COUNT(*) from NonDurableInMemoryOptimizedTable
go
select COUNT(*) from DurableInMemoryOptimizedTable
go

[Screenshot: NonDurableInMemoryOptimizedTable has 0 rows; DurableInMemoryOptimizedTable still has 90000]

As expected, the rows in NonDurableInMemoryOptimizedTable did not survive the database restart.
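To double-check which durability each memory-optimized table was created with, the sys.tables catalog view exposes a durability_desc column (available from SQL Server 2014):

```sql
-- Show the durability setting of all memory-optimized tables in the database.
USE SQLSeverHelp_IMO
GO
SELECT name, durability_desc
FROM   sys.tables
WHERE  is_memory_optimized = 1
GO
```

This should report SCHEMA_AND_DATA for DurableInMemoryOptimizedTable and SCHEMA_ONLY for NonDurableInMemoryOptimizedTable.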

To summarize, for a non-durable table only the schema (table definition) is available after a restart/reboot/crash. In other words, it’s a permanent table with temporary data (consistent only until restart). Non-durable tables are faster at data loading, at the cost of data durability. They are a perfect fit for certain requirements but useless for others. You MUST evaluate and understand the risk of non-durable tables before putting them in production.

Hope you have learned something new.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)

    Responses to “A-Z of In-Memory OLTP : Durable vs. Non-durable tables”

    1. manu said

       Thanks for sharing; this type of feature will surely help multiple applications.

    2. Arbaz Abid said

       Very nice article, thanks for the explanation. I recently optimized my database with In-Memory OLTP (SCHEMA_AND_DATA) and performance is 8 times faster now. The database has 12 million records.
