Help: SQL Server

Sharing my knowledge about SQL Server Troubleshooting Skills


A-Z of In-Memory OLTP : How In-Memory Table is made durable – Transaction Logging Concepts

Posted by blakhani on December 26, 2013


If you ever visit a bank and do a financial transaction, you will notice that the bank employee records information about the transaction in some format (paper based or electronic). In the same way, SQL Server needs to log transactions. Hence, the transaction log file (also known as the T-Log) is an integral part of a database. For an RDBMS to be durable, a transaction is logged before any modification is made. If an in-memory table is defined as "durable", then logging has to be done for that table. Transaction logging is done differently for in-memory tables, and that's why I am dedicating a blog post in the A-Z series to this topic alone.

There are a few major differences between transaction logging for disk-based tables and for in-memory tables. I will list them here:

  • Multiple changes in a table are merged into a single log record.
  • Log records are not generated for index modifications. Indexes on in-memory tables are rebuilt during database startup.
  • Transaction log records are written only if the transaction on the in-memory table commits. This means that no UNDO information is logged.

Because of all of the above, the amount of information written to the transaction log is smaller and more efficient than for disk-based tables. This is part of what makes the in-memory OLTP engine scalable and high performance.

Let’s understand each point by demo(s).

Multiple changes in a table are merged into a single log record

To check this behavior, we can create two identical tables, one disk-based and one in-memory. Here is the script, the same one you may have seen earlier on this blog.
-- Create database with IMO Filegroup, If exists drop it.
Use Master
go
If db_id('SQLSeverHelp_IMO') is NOT NULL
drop database SQLSeverHelp_IMO
GO
CREATE DATABASE SQLSeverHelp_IMO 
    ON PRIMARY (
    NAME = [SQLSeverHelp_IMO_data]
    ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO.mdf'
    )
    
    ,FILEGROUP [SQLSeverHelp_IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA ( 
    NAME = [SQLSeverHelp_IMO_dir]
    ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO_dir'
    ) 
    
    LOG ON (
    NAME = [SQLSeverHelp_IMO_log]
    ,Filename = 'C:\IMO_Database\SQLSeverHelp_IMO_log.ldf'
    ,SIZE = 102400KB 
    ) 
GO
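As a quick sanity check (my addition, not part of the original script), you can confirm the memory-optimized filegroup exists by querying the catalog:

-- Verify the MEMORY_OPTIMIZED_DATA filegroup was created
Use SQLSeverHelp_IMO
go
SELECT name, type, type_desc
FROM sys.filegroups
-- SQLSeverHelp_IMO_FG should show type 'FX' (MEMORY_OPTIMIZED_DATA_FILEGROUP)
go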


-- Create table in database

-- Normal (disk based) table
Use SQLSeverHelp_IMO 
go
CREATE TABLE DiskBasedTable
(
iID INT NOT NULL PRIMARY KEY NONCLUSTERED,
vFName CHAR(20) NOT NULL,
vLName CHAR(20) NOT NULL
)
GO

-- In-Memory table
Use SQLSeverHelp_IMO
GO
CREATE TABLE MyFirstMemporyOptimizedTable
(
iID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
vFName CHAR(20) NOT NULL,
vLName CHAR(20) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
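Before inserting data, it may also help to confirm that both tables were created as intended; sys.tables exposes the memory-optimized and durability attributes (again, a quick check I am adding here, not in the original script):

-- Confirm table types and durability
SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE name IN ('DiskBasedTable', 'MyFirstMemporyOptimizedTable')
go
-- MyFirstMemporyOptimizedTable should show is_memory_optimized = 1
-- and durability_desc = SCHEMA_AND_DATA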

Now, I am going to insert 100 rows into the disk-based table.

-- Insert into Disk Based Table
begin tran
DECLARE @counter INT 
SET @counter = 1 
WHILE @counter <= 100
  BEGIN 
      INSERT INTO DiskBasedTable 
      VALUES      (@counter, 'SQLServer-Help.com','Balmukund Lakhani')
      SET @counter = @counter + 1 
  END
 commit tran

Let’s examine the transaction log by well-known, undocumented (and unsupported command) fn_dblog

SELECT *
FROM sys.fn_dblog(NULL, NULL)
WHERE PartitionId IN (
        SELECT partition_id
        FROM sys.partitions
        WHERE object_id = object_id('DiskBasedTable')
        )
ORDER BY [Current LSN] ASC;
GO

As shown below, we can see 200 rows for our table (DiskBasedTable). A LOP_INSERT_ROWS operation is logged for both the HEAP and the INDEX. Since we inserted 100 rows, that produced 200 log records, i.e. two per row. In the Operation column, LOP stands for "logical operation".

[Image: fn_dblog output showing 200 LOP_INSERT_ROWS records for DiskBasedTable]
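Rather than scrolling through all 200 rows, you can also group the same output by operation and context (a summary sketch using the same fn_dblog columns as above):

-- Count log records per operation/context for DiskBasedTable
SELECT Operation, Context, COUNT(*) AS LogRecords
FROM sys.fn_dblog(NULL, NULL)
WHERE PartitionId IN (
        SELECT partition_id
        FROM sys.partitions
        WHERE object_id = object_id('DiskBasedTable')
        )
GROUP BY Operation, Context
-- Expected: LOP_INSERT_ROWS appears twice per row, once against the heap
-- (LCX_HEAP) and once against the nonclustered index (LCX_INDEX_LEAF)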

-- Total size of ALL Log Records 
SELECT sum([Log Record Length])
FROM sys.fn_dblog(NULL, NULL)
WHERE PartitionId IN (
        SELECT partition_id
        FROM sys.partitions
        WHERE object_id = object_id('DiskBasedTable')
        )

The above query returns 26000, i.e. roughly 25 KB. Now we will do the same insert operation for the in-memory table (MyFirstMemporyOptimizedTable) and look at the log records.

begin tran
DECLARE @counter INT 
SET @counter = 1
WHILE @counter <= 100
  BEGIN 
      INSERT INTO MyFirstMemporyOptimizedTable
      VALUES      (@counter, 'SQLServer-Help.com','Balmukund Lakhani')
      SET @counter = @counter + 1 
  END
 commit tran
Let’s examine log records for this operation. I am showing only top 3 records which are relevant for our discussion.
SELECT *
FROM sys.fn_dblog(NULL, NULL)
ORDER BY [Current LSN] ASC
GO
 
[Image: fn_dblog output showing three log records, including one ~6 KB LOP_HK record]

As we can see above, we have a total of three log records for the 100 rows inserted. Have a look at the length of the LOP_HK log record: it's ~6 KB. We can use another undocumented function, fn_dblog_xtp, to crack open that record. The "HK" in the Operation column is an artifact of the feature's code name, Hekaton.

-- Let's break apart the LOP_HK record
SELECT [current lsn], [transaction id], operation,
operation_desc, tx_end_timestamp, total_size,
object_name(table_id) AS TableName
FROM sys.fn_dblog_xtp(null, null)
WHERE [Current LSN] = '0000001f:0000011c:0002'  -- this is same as seen in earlier image

Here is the output in SSMS:

[Image: fn_dblog_xtp output showing 102 rows: HK_LOP_BEGIN_TX, 100 HK_LOP_INSERT_ROW records, and HK_LOP_COMMIT_TX]

The top and bottom rows (row 1 and row 102) are HK_LOP_BEGIN_TX and HK_LOP_COMMIT_TX. HK_LOP_INSERT_ROW appears 100 times. This proves the concept that the amount of data logged for an in-memory table is smaller than for a disk-based table.
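If you would rather not count 102 rows by eye, you can group the same fn_dblog_xtp output by operation (a small sketch reusing the columns from the query above; substitute the LSN from your own run):

-- Summarize the sub-records inside the single LOP_HK log record
SELECT operation_desc, COUNT(*) AS RecordCount
FROM sys.fn_dblog_xtp(null, null)
WHERE [Current LSN] = '0000001f:0000011c:0002' -- use the LSN from your own output
GROUP BY operation_desc
-- Expected: 1 x HK_LOP_BEGIN_TX, 100 x HK_LOP_INSERT_ROW, 1 x HK_LOP_COMMIT_TX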

Log records are not generated for any index modification

In the last screenshot, we can see that there are no log records containing information about index modifications, as opposed to the disk-based table, where we saw records with context LCX_INDEX_LEAF.
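One way to see the contrast directly (a sketch, assuming the Context values reported by fn_dblog) is to count the index-leaf log records for the disk-based table; the in-memory table simply has no such records, because its hash index exists only in memory and is rebuilt at startup:

-- Disk-based table: index changes are logged with context LCX_INDEX_LEAF
SELECT COUNT(*) AS IndexLogRecords
FROM sys.fn_dblog(NULL, NULL)
WHERE Context = 'LCX_INDEX_LEAF'
      AND PartitionId IN (
        SELECT partition_id
        FROM sys.partitions
        WHERE object_id = object_id('DiskBasedTable')
        )
-- Should return 100 here (one index record per inserted row); there is
-- no equivalent to look for on MyFirstMemporyOptimizedTable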

Transaction log records are written only if the transaction on the in-memory table is committed

For in-memory OLTP transactions, log records are generated only at commit time. Unlike disk-based tables, in-memory OLTP does not follow the write-ahead logging (WAL) protocol, which makes it highly efficient. To prove this point, we will start a transaction and roll it back. We will again use fn_dblog and count the number of rows.

Select count(*) from fn_dblog(null, null)
go
-- 1406 Rows
begin tran
update DiskBasedTable
set vFName = 'ExtremeExperts.com'
where iID = 10
rollback tran
go
Select count(*) from fn_dblog(null, null)
go
Select * from fn_dblog(null, null) order by [Current LSN] desc

Below is an update which was rolled back; even so, it generated four log records.

[Image: fn_dblog output showing four log records for the rolled-back update, from LOP_BEGIN_XACT to LOP_ABORT_XACT]

LOP_BEGIN_XACT stands for the beginning of a transaction, and LOP_ABORT_XACT stands for a rollback/abort. Now, let's do the same operation on the in-memory table.

Select count(*) from fn_dblog(null, null)
go
-- 1410 Rows
begin tran
update MyFirstMemporyOptimizedTable WITH (SNAPSHOT)
set vFName = 'ExtremeExperts.com'
where iID = 10
rollback tran
go
Select count(*) from fn_dblog(null, null)
go
--1410 Rows

[Image: fn_dblog row count unchanged at 1410 before and after the rolled-back in-memory update]

The row counts in the transaction log before and after are the same (1410), which means no log records are generated for an uncommitted transaction on an in-memory table.
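For contrast, and as a small extension of the demo (not in the original post), committing the same update should add log records, since for in-memory tables they are generated at commit time:

Select count(*) from fn_dblog(null, null)
go
begin tran
update MyFirstMemporyOptimizedTable WITH (SNAPSHOT)
set vFName = 'ExtremeExperts.com'
where iID = 10
commit tran
go
Select count(*) from fn_dblog(null, null)
go
-- The second count is now higher: the commit produced a LOP_HK record
-- (wrapped in LOP_BEGIN_XACT/LOP_COMMIT_XACT), unlike the rollback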

I hope the difference in logging is clear now. In summary, transaction logging for in-memory tables is much more efficient than transaction logging for disk-based tables. Stay tuned for more on Hekaton.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • 13 Responses to “A-Z of In-Memory OLTP : How In-Memory Table is made durable – Transaction Logging Concepts”

    1. Aman Ankit said

      Very well explained 🙂

    2. Abhay said

      Hi Balu sir,
      As always, an excellent post. But I have a query. I see that for 100 inserts fn_dblog showed only 1 record of 6 KB. However, you then used fn_dblog_xtp and it actually showed the breakdown of that 1 record (the LSN and transaction IDs are the same). Where is this extra information of 102 rows stored? Is it not stored in the transaction log as well? Even though it's not 200 rows, it's 50% of it. So, is it true to say that even though we see only 1 record via fn_dblog, there are more entries but fn_dblog does not show them? Is it also correct that if we use fn_dblog_xtp for a disk-based table then we might see more than 200 rows?…

      Kind regards
      Abhay

      • blakhani said

        @Abhay – Thanks for reading and asking questions.

        Question: Where is this extra information of 102 rows stored? Is it not stored in the transaction log as well?
        Answer: Transaction logging for disk-based tables and in-memory tables is done in the same transaction log. The format of the record is different, and fn_dblog doesn't understand it. That's why the new function fn_dblog_xtp was introduced, to break apart a LOP_HK record. The LogRecord column contains more data, and that's why the length of the record is greater.

        Question: Is it also correct that if we use fn_dblog_xtp for a disk-based table then we might see more than 200 rows?…
        Answer: No. Remember that "xtp" and "HK" refer to in-memory OLTP. The new function only cracks open records of type LOP_HK; for other records, it shows just one row (the same information).

    3. […] « A-Z of In-Memory OLTP : How In-Memory Table is made durable – Transaction Logging Concept… […]

    4. […] have already discussed “How In-Memory Table is durable” in this blog. Since transaction logging is not done for non-durable table, the data insertion is faster […]

    5. manu said

      Complex functionality explained in lucid manner, thanks a ton!!

    6. […] of the theory which we learned earlier, we would use “XTP Transaction Log” counter for demo. In this blog about transaction logging concepts, we read “Transaction log records are written only if the […]

    7. Great explanation!!

    8. Hi Balu sir,
      This is based on your point:
      “Transaction log records are written only if the transaction on in-memory table is committed. This would mean that no UNDO information would be logged.”

      This means that SQL Server will not use the WAL protocol starting with 2014. Am I right? If this is correct, then what will happen to the UNDO data (keeping in mind that the data file may contain both committed and uncommitted transactions at the time of a crash)?

      Regards
      Abhay

      • blakhani said

        1. WAL is still followed. Write Ahead Logging – Log before you make final change. You may want to read WAL again on http://support.microsoft.com/kb/230785
        2. In-memory table data doesn't go to the MDF file; it is stored in data and delta files.
        3. Recovery mechanism is different as compared to regular disk based tables.

    9. […] A-Z of In-Memory OLTP : How In-Memory Table is made durable – Transaction Logging Concepts […]
