Help: SQL Server

Sharing my knowledge about SQL Server troubleshooting skills


Posts Tagged ‘learning’

A-Z of In-Memory OLTP : Storage structure of rows

Posted by blakhani on January 7, 2014


In our earlier posts we learned about various concepts. In this post, we will understand how data is actually stored in a row. This forms the foundation for understanding the indexes created on an in-memory table. There are major differences between the storage format of a disk-based table and an in-memory table. Here are a few of them.

  • There is no 8 KB page concept for in-memory tables, and therefore no concept of extents.
  • Rows belonging to the same table need not be stored near each other.
  • Rows are linked to each other via index pointers, which is why we need at least one index on an in-memory table.
  • There is no concept of a clustered index because data rows are not sorted.
  • Theoretically, rows are free-floating and there is no limit on row size, BUT since we are allowed to move data to a disk-based table (using INSERT), the row size is limited to 8060 bytes.
  • Not every data type is allowed for columns (varchar(max), LOB types, sql_variant, etc. are not supported).

Here is the pictorial representation of a row:

[Image: structure of an in-memory row (Row Header followed by Row Data)]

Row Data is the actual "data" of the row, i.e. the values we insert into the table columns. The Row Header, on the other hand, is information about the row itself, and it consists of several fields. Begin Timestamp contains the transaction timestamp of when the row was inserted. End Timestamp contains the transaction timestamp of when the row was deleted. Each timestamp field is 8 bytes (16 bytes total). These two values determine the "visibility" of a row for a transaction. You might ask: what is the value of End Timestamp when a row is inserted? It's a special value referred to as infinity. In an earlier post, we learned that UPDATE = DELETE + INSERT. This means that when an update happens, we "close" the existing row by setting its end timestamp and "open" a new row by inserting a new row with a new begin timestamp and an end timestamp of infinity. Here is an example.

[Image: row versions before and after an update; the old version is closed at timestamp 200, the new version ends at infinity]

After the update transaction, the first row is visible only to those transactions whose timestamp is between 100 and 200.
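The visibility rule just described can be sketched as a small check. This is only a language-agnostic illustration of the concept (not SQL Server code); the timestamps 100 and 200 are the example values from the diagram.

```python
# Each row version carries a begin and an end transaction timestamp.
INFINITY = float("inf")

def is_visible(row_version, txn_timestamp):
    """A row version is visible to a transaction whose timestamp falls
    between the version's begin timestamp and end timestamp."""
    begin_ts, end_ts = row_version
    return begin_ts <= txn_timestamp < end_ts

# UPDATE = DELETE + INSERT: the old version is "closed" at timestamp 200,
# and a new version is "opened" with end timestamp infinity.
old_version = (100, 200)
new_version = (200, INFINITY)

print(is_visible(old_version, 150))  # True: 150 is between 100 and 200
print(is_visible(new_version, 150))  # False: new version not yet visible
print(is_visible(new_version, 250))  # True: new version is now current
```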

Another field in the Row Header is the Statement ID, which is unique for each statement. This 4-byte field stores the ID of the statement that created the row.

The next field is the Index Link Count, which specifies the number of indexes on the row. The final part of the row header is the set of pointers to the next row for each index. Since each pointer is 8 bytes, the length of this part depends on the number of indexes on the table: with 4 indexes, it would be 4 * 8 = 32 bytes.
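The size arithmetic above can be captured in a couple of lines (an illustrative Python sketch using only the field sizes stated in this post):

```python
POINTER_BYTES = 8    # each index pointer in the row header is 8 bytes
TIMESTAMP_BYTES = 8  # begin and end timestamps are 8 bytes each
STMT_ID_BYTES = 4    # the Statement ID field is 4 bytes

def index_pointer_array_bytes(index_count):
    """Length of the per-row pointer set: one 8-byte pointer per index."""
    return index_count * POINTER_BYTES

print(index_pointer_array_bytes(4))         # 32, as in the example above
print(2 * TIMESTAMP_BYTES + STMT_ID_BYTES)  # 20 bytes for timestamps + statement ID
```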

Once you are clear on row storage, it is easy to understand the special indexes used for in-memory tables. In the next blog posts we will discuss hash indexes and range indexes.

Stay Tuned!

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • Posted in A - Z Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014 | 3 Comments »

    A-Z of In-Memory OLTP : Troubleshooting database Startup and recovery failures

    Posted by blakhani on January 2, 2014


    As promised in my previous blog, I am going to demonstrate a few database startup failures and the techniques to find the cause and fix them. The first trick to always remember: whenever you see an "operating system error" with some number (2, 5, 32, etc.), find the text of that error number. The trick is to use the Windows command "net helpmsg" to convert the number to text. Here are a few quick samples.

    C:\WINDOWS\system32>net helpmsg 2
    The system cannot find the file specified.

    C:\WINDOWS\system32>net helpmsg 5
    Access is denied.

    C:\WINDOWS\system32>net helpmsg 32
    The process cannot access the file because it is being used by another process.

    [Image: net helpmsg output for error codes 2, 5 and 32]
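    For quick reference, the three mappings above can be kept in a small lookup table. This is just a Python illustration of the codes used in this post; on Windows, net helpmsg remains the authoritative source.

```python
# Windows system error codes from this post, with their net helpmsg text.
WIN_ERROR_TEXT = {
    2: "The system cannot find the file specified.",
    5: "Access is denied.",
    32: "The process cannot access the file because it is being used by another process.",
}

def helpmsg(code):
    """Rough stand-in for `net helpmsg <code>` for the codes above."""
    return WIN_ERROR_TEXT.get(code, "Unknown error code %d" % code)

print(helpmsg(5))  # Access is denied.
```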

    Now, we will first cause a DLL generation failure during the startup of the database. As we learned in an earlier post, the database startup (a.k.a. recovery) process will fail.

    [Image: startup failure for database SQLSeverHelp_IMO]

    Here is what we would see in the ERRORLOG.

    2013-12-30 03:25:44.280 spid51       Starting up database ‘SQLSeverHelp_IMO’.
    2013-12-30 03:25:44.520 spid39s      restoreHkDatabase: DbId 8, Msg 41312, Level 16, State 4, Unable to call into the C compiler. GetLastError = 2.
    2013-12-30 03:25:44.520 spid39s      [ERROR] Database ID: [8]. Failed to load XTP checkpoint. Error code: 0x82000008. (e:\sql12_main_t\sql\ntdbms\hekaton\sqlhost\sqlmin\hkhostdb.cpp : 2558 – ‘RecoverHkDatabase’)

    Now you understand why I mentioned the net helpmsg command: error 2 = "The system cannot find the file specified." The next task is to find out which path/file. To trace the activity of a process, one of the best-known tools is Process Monitor (a free tool by Mark Russinovich). I ran the tool and applied a filter for Process Name = sqlservr.exe. Here is the filtered output from Process Monitor.

    [Image: Process Monitor output filtered for Process Name = sqlservr.exe, showing lookups for CL.EXE]

    It means that sqlservr.exe is looking for CL.EXE under the C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Binn\Xtp\VC\bin folder and cannot find the path. Indeed, that is the problem. To simulate this failure, I had renamed the bin folder to bin_xxx, as shown below.

    [Image: the bin folder renamed to bin_xxx]

    Once I renamed the folder back to “bin”, I was able to bring the database online.

    I simulated another DLL generation failure by making some changes at the operating system level and restarting SQL Server.

    [Image: database recovery failure after SQL Server restart]

    Always check the ERRORLOG; here is what we would see.

    2013-12-31 20:01:35.040 spid20s      SetNamedSecurityInfoW failed for directory C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\xtp for reason 5.

    2013-12-31 20:01:35.040 spid20s      Error: 41334, Severity: 16, State: 2.
    2013-12-31 20:01:35.040 spid20s      The code generation directory ‘C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\xtp’ cannot be created or set up correctly.
    2013-12-31 20:01:35.060 spid20s      Error: 41334, Severity: 16, State: 2.
    2013-12-31 20:01:35.060 spid20s      The code generation directory ‘C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\xtp’ cannot be created or set up correctly.
    2013-12-31 20:01:35.060 spid20s      Error: 41334, Severity: 16, State: 2.
    2013-12-31 20:01:35.060 spid20s      The code generation directory ‘C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\xtp’ cannot be created or set up correctly.
    2013-12-31 20:01:35.170 spid19s      Error: 41334, Severity: 16, State: 2.
    2013-12-31 20:01:35.170 spid19s      The code generation directory ‘C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\xtp’ cannot be created or set up correctly.

    The important piece of the error message is reason 5. Recall from above: net helpmsg 5 = "Access is denied."

    How would you troubleshoot? Again, the same tool: Process Monitor. If we filter for all Access Denied results (right-click the "Access Denied" text and choose "Include"), it's clear that the service account is trying to generate the DLL for the table but is unable to do so.

    [Image: Process Monitor results filtered to Access Denied]

    Checking the properties of the XTP folder reveals the same. To break things, I had given "Deny" permission on that folder to the SQL Server service account.

    [Image: Deny permission on the xtp folder for the SQL Server service account]

    Once I removed the "Deny" and granted "Allow", I was able to bring the database online using the command below:

    Alter database SQLSeverHelp_IMO set online
    go

    Another possible reason a database fails to recover is an out-of-memory error. Since all data must be loaded into memory before in-memory tables can be accessed, the in-memory engine needs enough memory to hold all of the data at database startup. If the system is low on memory, or max server memory is set too low, we might encounter a recovery failure. In that situation, you may want to stop other processes running on the server so that SQL Server has enough RAM to load the in-memory tables.
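    The memory requirement can be reduced to a simple check. The function name and the notion of "available memory" here are hypothetical, for illustration only:

```python
def can_recover(in_memory_data_bytes, available_memory_bytes):
    # Durable in-memory tables are fully loaded at database startup,
    # so recovery needs enough memory to hold all of their data.
    return in_memory_data_bytes <= available_memory_bytes

GB = 2**30
print(can_recover(2 * GB, 8 * GB))  # True: the data fits, recovery proceeds
print(can_recover(8 * GB, 2 * GB))  # False: low memory, recovery fails
```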

    I hope this blog has given you some hints about possible failures, troubleshooting tricks, and their resolutions.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • Posted in A - Z Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014 | 11 Comments »

    A-Z of In-Memory OLTP : How In-Memory Table is made durable – Transaction Logging Concepts

    Posted by blakhani on December 26, 2013


    If you ever visit a bank and perform a financial transaction, you will notice that the bank employee records information about the transaction in some format (paper-based or electronic). In the same way, SQL Server needs to log transactions. Hence, the transaction log (also known as the T-Log) file is an integral part of a database. For the RDBMS to be durable, a transaction is logged before any modification is made. If an in-memory table is defined as "durable", then logging has to be done for that table. Transaction logging is done differently for in-memory tables, and that's why I am dedicating a blog post in the A-Z series to this topic alone.

    There are a few major differences between the transaction logging of disk-based tables and in-memory tables. I will list them here:

    • Multiple changes in a table are merged into a single log record.
    • Log records are not generated for index modifications. The indexes on the table are rebuilt during database startup.
    • Transaction log records are written only if the transaction on the in-memory table commits. This means no UNDO information is logged.

    Due to all of the above, the amount of information written to the transaction log is smaller, and logging is more efficient, compared to disk-based tables. This makes In-Memory OLTP a scalable, high-performance engine.
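    To make the three rules concrete before the demos, here is a toy model of log-record counts. It is my own sketch, not SQL Server internals; the counts mirror the fn_dblog results shown later in this post.

```python
def disk_based_log_records(rows, indexes=1):
    # Disk-based: one LOP_INSERT_ROWS per row for the heap,
    # plus one per row for each index.
    return rows * (1 + indexes)

def in_memory_log_records(rows, committed=True):
    # In-memory: index changes are never logged, and nothing is logged
    # unless the transaction commits; all row changes merge into a single
    # record framed by begin/commit markers.
    if not committed:
        return 0
    return 3  # begin marker + one merged record + commit marker

print(disk_based_log_records(100))                  # 200 for 100 rows
print(in_memory_log_records(100))                   # 3 for the same 100 rows
print(in_memory_log_records(100, committed=False))  # 0 for a rollback
```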

    Let’s understand each point by demo(s).

    Multiple changes in table are merged into single log record

    To check this behavior, we can create two identical tables: a disk-based table and an in-memory table. Here is the script, the same one you might have seen earlier on this blog.
    -- Create database with IMO Filegroup, If exists drop it.
    Use Master
    go
    If db_id('SQLSeverHelp_IMO') is NOT NULL
    drop database SQLSeverHelp_IMO
    GO
    CREATE DATABASE SQLSeverHelp_IMO 
        ON PRIMARY (
        NAME = [SQLSeverHelp_IMO_data]
        ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO.mdf'
        )
        
        ,FILEGROUP [SQLSeverHelp_IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA ( 
        NAME = [SQLSeverHelp_IMO_dir]
        ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO_dir'
        ) 
        
        LOG ON (
        NAME = [SQLSeverHelp_IMO_log]
        ,Filename = 'C:\IMO_Database\SQLSeverHelp_IMO_log.ldf'
        ,SIZE = 102400KB 
        ) 
    GO
    
    
    -- Create table in database
    
    -- Normal (disk based) table
    Use SQLSeverHelp_IMO 
    go
    CREATE TABLE DiskBasedTable
    (
    iID INT NOT NULL PRIMARY KEY NONCLUSTERED,
    vFName CHAR(20) NOT NULL,
    vLName CHAR(20) NOT NULL
    )
    GO
    
    -- In-Memory table
    Use SQLSeverHelp_IMO
    GO
    CREATE TABLE MyFirstMemporyOptimizedTable
    (
    iID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    vFName CHAR(20) NOT NULL,
    vLName CHAR(20) NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
    GO
    

    Now, I am going to insert 100 rows in Disk based table.

    -- Insert into Disk Based Table
    begin tran
    DECLARE @counter INT 
    SET @counter = 1 
    WHILE @counter <= 100
      BEGIN 
          INSERT INTO DiskBasedTable 
          VALUES      (@counter, 'SQLServer-Help.com','Balmukund Lakhani')
          SET @counter = @counter + 1 
      END
     commit tran
    

    Let's examine the transaction log with the well-known, undocumented (and unsupported) command fn_dblog.

    SELECT *
    FROM sys.fn_dblog(NULL, NULL)
    WHERE PartitionId IN (
            SELECT partition_id
            FROM sys.partitions
            WHERE object_id = object_id('DiskBasedTable')
            )
    ORDER BY [Current LSN] ASC;
    GO
    

    As shown below, we can see 200 rows for our table (DiskBasedTable). The LOP_INSERT_ROWS operation is logged for both the HEAP and the INDEX. Since we inserted 100 rows, this produced 200 log records, i.e. two per row. In the Operation column, LOP stands for logical operation.

    [Image: fn_dblog output showing 200 LOP_INSERT_ROWS records for DiskBasedTable]

    -- Total size of ALL Log Records 
    SELECT sum([Log Record Length])
    FROM sys.fn_dblog(NULL, NULL)
    WHERE PartitionId IN (
            SELECT partition_id
            FROM sys.partitions
            WHERE object_id = object_id('DiskBasedTable')
            )
    

    The above query returns 26000 bytes, i.e. about 25 KB. Now we will do the same insert operation on the in-memory table (MyFirstMemporyOptimizedTable) and look at the log records.

    begin tran
    DECLARE @counter INT 
    SET @counter = 1
    WHILE @counter <= 100
      BEGIN 
          INSERT INTO MyFirstMemporyOptimizedTable
          VALUES      (@counter, 'SQLServer-Help.com','Balmukund Lakhani')
          SET @counter = @counter + 1 
      END
     commit tran
    
    Let's examine the log records for this operation. I am showing only the top 3 records, which are the ones relevant to our discussion.
    SELECT *
    FROM sys.fn_dblog(NULL, NULL)
    ORDER BY [Current LSN] ASC
    GO
    
     
    [Image: fn_dblog output showing three log records, including one LOP_HK record of ~6 KB]

    As we can see above, we have a total of three log records for the 100 inserted rows. Have a look at the length of the LOP_HK log record: it's ~6 KB. We can use another undocumented function, fn_dblog_xtp, to crack open that record. The HK in the operation column is an artifact of the feature's code name, Hekaton.

     -- Lets break-apart the LOP_HK
    SELECT [current lsn], [transaction id], operation,
    operation_desc, tx_end_timestamp, total_size,
    object_name(table_id) AS TableName
    FROM sys.fn_dblog_xtp(null, null)
    WHERE [Current LSN] = '0000001f:0000011c:0002'  -- this is same as seen in earlier image
    

    Here is the output in SSMS:

    [Image: fn_dblog_xtp output for the LOP_HK record in SSMS]

    The top and bottom rows (row #1 and row #102) are HK_LOP_BEGIN_TX and HK_LOP_COMMIT_TX; HK_LOP_INSERT_ROW appears 100 times. This proves that the amount of data logged for an in-memory table is smaller than for a disk-based table.

    Log records are not generated for any modification of Index

    In the last screenshot, we can see that there are no log records containing information about index modifications, as opposed to the disk-based table, where we saw LOP_INSERT_ROWS entries against the index leaf (LCX_INDEX_LEAF context).

    Transaction log records are written only if the transaction on in-memory table is committed

    For In-Memory OLTP transactions, log records are generated only at commit time. Unlike disk-based tables, In-Memory OLTP does not follow the write-ahead logging (WAL) protocol, which makes it highly efficient. To prove this point, we will create a transaction and roll it back. We will again use fn_dblog and count the number of rows.

    Select count(*) from fn_dblog(null, null)
    go
    -- 1406 Rows
    begin tran
    update DiskBasedTable
    set vFName = 'ExtremeExperts.com'
    where iID = 10
    rollback tran
    go
    Select count(*) from fn_dblog(null, null)
    go
    Select * from fn_dblog(null, null) order by [Current LSN] desc

    Below is the update which was rolled back; it generated 4 log records.

    [Image: fn_dblog output showing 4 log records for the rolled-back update]

    LOP_BEGIN_XACT stands for begin transaction and LOP_ABORT_XACT stands for rollback/abort of a transaction. Now let's do the same operation on the in-memory table.
    Select count(*) from fn_dblog(null, null)
    go
    -- 1410 Rows
    begin tran
    update MyFirstMemporyOptimizedTable WITH (SNAPSHOT)
    set vFName = 'ExtremeExperts.com'
    where iID = 10
    rollback tran
    go
    Select count(*) from fn_dblog(null, null)
    go
    --1410 Rows
    

    [Image: fn_dblog row counts before and after the rolled-back in-memory update]

    The row counts in the transaction log before and after are the same (1410 rows), which means no additional log records are generated for an uncommitted transaction on an in-memory table.
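    The behavior demonstrated above can also be modeled as commit-time logging: changes are buffered per transaction, and a merged record reaches the log only on commit. This is my own toy model, not SQL Server internals:

```python
class InMemoryTxn:
    """Toy model of commit-time logging for an in-memory transaction."""

    def __init__(self, log):
        self.log = log     # shared transaction log (a plain list here)
        self.changes = []  # buffered modifications, not yet logged

    def update(self, row_id, value):
        self.changes.append((row_id, value))

    def commit(self):
        # Only now does anything reach the log, as one merged record.
        self.log.append(("MERGED_RECORD", list(self.changes)))

    def rollback(self):
        # Buffered changes are discarded; the log is never touched.
        self.changes = []

log = []
t1 = InMemoryTxn(log)
t1.update(10, "ExtremeExperts.com")
t1.rollback()
print(len(log))  # 0: a rolled-back transaction leaves no log records

t2 = InMemoryTxn(log)
t2.update(10, "ExtremeExperts.com")
t2.commit()
print(len(log))  # 1: one merged record at commit
```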

    I hope the difference in logging is now clear. In summary, logging a transaction on an in-memory table is much more efficient than transaction logging on a disk-based table. Stay tuned for more on Hekaton.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • Posted in A - Z Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014 | 13 Comments »