
A-Z of In-Memory OLTP : How In-Memory Table is made durable – Transaction Logging Concepts

Posted by blakhani on December 26, 2013


If you ever visit a bank and perform a financial transaction, you will notice that the bank employee records information about the transaction in some format (paper based or electronic). In the same way, SQL Server needs to log transactions, and hence the transaction log (also known as the T-Log) file is an integral part of a database. For an RDBMS to be durable, a transaction is logged before any modification is made. If an in-memory table is defined as “durable”, then logging has to be done for that table as well. Transaction logging is done differently for in-memory tables, and that’s why I am dedicating a blog post to this topic in the A-Z Series.
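As a quick illustration of that durability choice (the table names here are made up for illustration and assume a database with a MEMORY_OPTIMIZED_DATA filegroup, like the one created later in this post): only tables created with DURABILITY = SCHEMA_AND_DATA are transaction-logged, while SCHEMA_ONLY tables skip row logging entirely and lose their rows on restart.

-- Durable in-memory table: rows survive a restart, changes hit the T-Log
CREATE TABLE DurableDemo
(
iID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

-- Non-durable in-memory table: only the schema survives a restart, no row logging
CREATE TABLE NonDurableDemo
(
iID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO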

There are a few major differences between transaction logging for disk-based tables versus in-memory tables. I will list them here:

  • Multiple changes to a table are merged into a single log record.
  • Log records are not generated for any modification of an index. Indexes on the table are rebuilt during startup of the database.
  • Transaction log records are written only if the transaction on the in-memory table is committed. This means that no UNDO information is logged.

Due to all of the above, the amount of information written to the transaction log is smaller, and logging is more efficient, as compared to disk-based tables. This is part of what makes the in-memory OLTP engine scalable and high performance.

Let’s understand each point through a demo.

Multiple changes to a table are merged into a single log record

To check this behavior, we can create two identical tables – one disk-based and one in-memory. Here is the script, which is the same script you might have seen earlier on this blog.
-- Create database with IMO Filegroup, If exists drop it.
Use Master
go
If db_id('SQLSeverHelp_IMO') is NOT NULL
drop database SQLSeverHelp_IMO
GO
CREATE DATABASE SQLSeverHelp_IMO 
    ON PRIMARY (
    NAME = [SQLSeverHelp_IMO_data]
    ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO.mdf'
    )
    
    ,FILEGROUP [SQLSeverHelp_IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA ( 
    NAME = [SQLSeverHelp_IMO_dir]
    ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO_dir'
    ) 
    
    LOG ON (
    NAME = [SQLSeverHelp_IMO_log]
    ,Filename = 'C:\IMO_Database\SQLSeverHelp_IMO_log.ldf'
    ,SIZE = 102400KB 
    ) 
GO


-- Create table in database

-- Normal (disk based) table
Use SQLSeverHelp_IMO 
go
CREATE TABLE DiskBasedTable
(
iID INT NOT NULL PRIMARY KEY NONCLUSTERED,
vFName CHAR(20) NOT NULL,
vLName CHAR(20) NOT NULL
)
GO

-- In-Memory table
Use SQLSeverHelp_IMO
GO
CREATE TABLE MyFirstMemporyOptimizedTable
(
iID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
vFName CHAR(20) NOT NULL,
vLName CHAR(20) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
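
As a side note (my addition, not part of the original demo): once the table has data in it, the BUCKET_COUNT chosen above can be sanity-checked with the sys.dm_db_xtp_hash_index_stats DMV available in SQL Server 2014.

-- Check hash index bucket usage for the in-memory table
SELECT object_name(object_id) AS TableName,
       total_bucket_count,
       empty_bucket_count,
       avg_chain_length
FROM sys.dm_db_xtp_hash_index_stats
WHERE object_id = object_id('MyFirstMemporyOptimizedTable')
GO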

Now I am going to insert 100 rows into the disk-based table.

-- Insert into Disk Based Table
begin tran
DECLARE @counter INT 
SET @counter = 1 
WHILE @counter <= 100
  BEGIN 
      INSERT INTO DiskBasedTable 
      VALUES      (@counter, 'SQLServer-Help.com','Balmukund Lakhani')
      SET @counter = @counter + 1 
  END
 commit tran

Let’s examine the transaction log using the well-known, undocumented (and unsupported) command fn_dblog.

SELECT *
FROM sys.fn_dblog(NULL, NULL)
WHERE PartitionId IN (
        SELECT partition_id
        FROM sys.partitions
        WHERE object_id = object_id('DiskBasedTable')
        )
ORDER BY [Current LSN] ASC;
GO

As shown below, we can see 200 rows for our table (DiskBasedTable). The LOP_INSERT_ROWS operation is logged for both the HEAP and the INDEX. Since we inserted 100 rows, this caused 200 log records, i.e. two per row. In the Operation column, LOP stands for logical operation.

[Image: fn_dblog output showing 200 LOP_INSERT_ROWS log records for DiskBasedTable]
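
If the raw fn_dblog output is too noisy, here is a quick aggregation (my addition) that confirms the two-records-per-row pattern by counting log records per operation and context:

-- Count log records per operation/context for DiskBasedTable
SELECT Operation, Context, COUNT(*) AS LogRecordCount
FROM sys.fn_dblog(NULL, NULL)
WHERE PartitionId IN (
        SELECT partition_id
        FROM sys.partitions
        WHERE object_id = object_id('DiskBasedTable')
        )
GROUP BY Operation, Context
GO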

-- Total size of ALL Log Records 
SELECT sum([Log Record Length])
FROM sys.fn_dblog(NULL, NULL)
WHERE PartitionId IN (
        SELECT partition_id
        FROM sys.partitions
        WHERE object_id = object_id('DiskBasedTable')
        )

The above query returns 26000, i.e. ~25 KB (about 130 bytes per log record on average). Now we will do the same insert operation for the in-memory table (MyFirstMemporyOptimizedTable) and look at the log records.

begin tran
DECLARE @counter INT 
SET @counter = 1
WHILE @counter <= 100
  BEGIN 
      INSERT INTO MyFirstMemporyOptimizedTable
      VALUES      (@counter, 'SQLServer-Help.com','Balmukund Lakhani')
      SET @counter = @counter + 1 
  END
 commit tran
Let’s examine the log records for this operation. I am showing only the top 3 records, which are relevant for our discussion.
SELECT *
FROM sys.fn_dblog(NULL, NULL)
ORDER BY [Current LSN] ASC
GO
 
[Image: fn_dblog output showing three log records, including one LOP_HK record of ~6 KB]

As we can see above, there are a total of three log records for the 100 rows inserted. Have a look at the length of the LOP_HK log record: it is ~6 KB. We can use another undocumented function, fn_dblog_xtp, to crack open that record. HK in the Operation column is an artifact of the feature’s code name, Hekaton.

-- Let's break apart the LOP_HK record
SELECT [current lsn], [transaction id], operation,
operation_desc, tx_end_timestamp, total_size,
object_name(table_id) AS TableName
FROM sys.fn_dblog_xtp(null, null)
WHERE [Current LSN] = '0000001f:0000011c:0002'  -- this is same as seen in earlier image

Here is the output in SSMS:

[Image: fn_dblog_xtp output with 102 rows: HK_LOP_BEGIN_TX, 100 x HK_LOP_INSERT_ROW, HK_LOP_COMMIT_TX]

The top and bottom rows (row 1 and row 102) are HK_LOP_BEGIN_TX and HK_LOP_COMMIT_TX; HK_LOP_INSERT_ROW is seen 100 times. This proves the point that the amount of data logged for an in-memory table is smaller than for a disk-based table.
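
To put a number on it, we can also total the payload sizes inside that single LOP_HK record (a sketch of mine, reusing the total_size column and the LSN from the query above):

-- Total payload bytes carried inside the single LOP_HK record
SELECT SUM(total_size) AS TotalBytes, COUNT(*) AS PayloadRecords
FROM sys.fn_dblog_xtp(NULL, NULL)
WHERE [Current LSN] = '0000001f:0000011c:0002' -- same LSN as above
GO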

Log records are not generated for any modification of an index

In the last screenshot, we can see that there are no log records containing information about index modification, as opposed to the disk-based table, where we saw log records with an LCX_INDEX_LEAF context.
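
One way to double-check this is to group the payload of that LOP_HK record by logical operation; only begin/insert/commit operations show up, and nothing index-related. This verification query is my own sketch:

-- Group the LOP_HK payload by logical operation; no index operations appear
SELECT operation_desc, COUNT(*) AS RecordCount
FROM sys.fn_dblog_xtp(NULL, NULL)
WHERE [Current LSN] = '0000001f:0000011c:0002' -- same LSN as above
GROUP BY operation_desc
GO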

Transaction log records are written only if the transaction on the in-memory table is committed

For in-memory OLTP transactions, log records are generated only at commit time. As opposed to disk-based tables, in-memory OLTP does not follow the write-ahead logging (WAL) protocol, which makes it highly efficient. To prove this point, we will create a transaction and roll it back. We will again use fn_dblog and count the number of rows.

Select count(*) from fn_dblog(null, null)
go
-- 1406 Rows
begin tran
update DiskBasedTable
set vFName = 'ExtremeExperts.com'
where iID = 10
rollback tran
go
Select count(*) from fn_dblog(null, null)
go
Select * from fn_dblog(null, null) order by [Current LSN] desc

As shown below, the update, even though it was rolled back, still generated 4 log records.

[Image: fn_dblog output for the rolled-back update on DiskBasedTable, from LOP_BEGIN_XACT to LOP_ABORT_XACT]

LOP_BEGIN_XACT stands for the beginning of a transaction and LOP_ABORT_XACT stands for a rollback/abort of a transaction. Now let’s do the same operation for the in-memory table.
Select count(*) from fn_dblog(null, null)
go
-- 1410 Rows
begin tran
update MyFirstMemporyOptimizedTable WITH (SNAPSHOT)
set vFName = 'ExtremeExperts.com'
where iID = 10
rollback tran
go
Select count(*) from fn_dblog(null, null)
go
--1410 Rows

[Image: fn_dblog row count unchanged at 1410 after the rolled-back in-memory update]

The number of rows in the transaction log before and after is the same (i.e. 1410), which means no additional log records are generated for an uncommitted transaction when it is performed on an in-memory table.

I hope that the difference in logging is now clear. In summary, we can say that transaction logging for in-memory tables is much more efficient than transaction logging for disk-based tables. Stay tuned for more on Hekaton.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)

Posted in A-Z Series, In-Memory OLTP, SQL Server 2014