Help: SQL Server

Sharing my knowledge about SQL Server Troubleshooting Skills


Archive for the ‘Hekaton Series’ Category

A-Z of In-Memory OLTP : Hash Indexes (Part 2)

Posted by blakhani on January 14, 2014


Imagine that you are standing in front of a multistoried apartment building and don’t know which apartment your friend lives in (find 1 out of 100). It would be tough to knock on each and every door to check whether the house belongs to your friend (perform a scan). What if the security guard has a directory (an index) of names and apartment numbers? Easy? Each apartment is a row, and you know the predicate (the WHERE clause). Indexes in any technology (and in real life as well) are used for faster access to the data.

In the previous post of the A-Z series, we saw what a single hash index looks like and learned about hash collisions. In this post, let’s go one step further and understand how two hash indexes on the same table would look. To make the picture more intuitive, I have colored each column value to match the bucket it belongs to. As earlier, I am using the LEN function as the hash function.
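As a quick refresher, here is a toy illustration of the idea. This is only a sketch: SQL Server’s real hash function is internal, and the names below are sample values, but it shows how LEN-as-hash sends same-length values into the same bucket.

-- Toy hash: bucket = LEN(value). Same-length values collide into one bucket.
SELECT v.fName,
       LEN(v.fName) AS [Bucket (toy hash)]
FROM   (VALUES ('Balmukund'), ('Vinod'), ('Pinal'), ('Sourabh')) AS v (fName);
-- 'Vinod' and 'Pinal' both land in bucket 5 - a hash collision.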

[image: two hash indexes (fName and Company) on the same table]

Along with the earlier index on the fName column, we now have another index on the Company column. If we compare the image above with the single hash index image in the previous blog, it’s clear that “Pointer 2” has been added in the row header area of each row (read here). As we can see above, three rows fall into the same bucket for the Company column. The bucket of the hash index points to the first row of that bucket, <Balmukund, Microsoft>. Due to the collision on company Microsoft, chain pointers in each existing row point to the next row of the same bucket (red arrows in the image above).

In the picture above, we have assumed that no deletion has happened, which is why we see ∞ (infinity) in the End Timestamp for all four rows. This means that all rows are valid from timestamp 300 onwards (the maximum begin timestamp is 300). If a delete or an update (= delete + insert) happens for a row, then, as described in the earlier blog, the timestamp would be closed for the deleted data and a new row would be created with a new begin timestamp. Let’s assume that we fired an update statement to modify Balmukund’s company to Ramco at timestamp 350. We would put 350 as the end timestamp for the <Balmukund, Microsoft> row and insert a new row <Balmukund, Ramco>. All pointers need modification. Since LEN('Ramco') = 5 and there is no hash collision, a new pointer is added.
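If the table in the picture existed as a real table (the table and column names below are hypothetical, used only for illustration), the statement fired at timestamp 350 would simply be:

-- Hypothetical table and column names, matching the picture above.
UPDATE dbo.NameCompany
SET    Company = 'Ramco'
WHERE  fName = 'Balmukund';
-- Internally this is delete + insert: the <Balmukund, Microsoft> version gets
-- end timestamp 350, and a new <Balmukund, Ramco> version is inserted with
-- begin timestamp 350.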

[image: hash indexes after updating Balmukund’s company to Ramco at timestamp 350]

Later, when garbage collection happens, the first row <Balmukund, Microsoft> would be removed and the pointers would be modified.

[image: hash indexes after garbage collection removes the <Balmukund, Microsoft> row]

You may ask: is there any way to find how many hash buckets we have and whether we have collisions? Yes, of course! SQL Server has the DMV sys.dm_db_xtp_hash_index_stats to help us investigate. Let’s use a script to understand this concept.

-- Create database with IMO Filegroup, If exists drop it.
Use Master
go
If db_id('SQLSeverHelp_IMO') is NOT NULL
drop database SQLSeverHelp_IMO
GO
CREATE DATABASE SQLSeverHelp_IMO 
    ON PRIMARY (
    NAME = [SQLSeverHelp_IMO_data]
    ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO.mdf'
    )
    
    ,FILEGROUP [SQLSeverHelp_IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA ( 
    NAME = [SQLSeverHelp_IMO_dir]
    ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO_dir'
    ) 
    
    LOG ON (
    NAME = [SQLSeverHelp_IMO_log]
    ,Filename = 'C:\IMO_Database\SQLSeverHelp_IMO_log.ldf'
    ) 
    COLLATE Latin1_General_100_BIN2 
GO
-- Create table in database
-- use the database which is already created
Use SQLSeverHelp_IMO
GO
CREATE TABLE MyFirstMemporyOptimizedTable
(
    iID INT NOT NULL,
    vFName VARCHAR(20) NOT NULL,
    vLName VARCHAR(20) NOT NULL,
    CONSTRAINT imo_pk_iID primary key NONCLUSTERED HASH (iID) WITH (BUCKET_COUNT = 200),
    index imo_idx_vFname  NONCLUSTERED HASH (vFName) WITH (BUCKET_COUNT = 200)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO

 

I have given a bucket count of 200. Can you guess how many buckets SQL Server is actually going to create? If you can’t answer, then you have not read part 1 of hash indexes, which I mentioned in the beginning. SQL Server rounds the bucket count up to the next power of two, so it is going to create 256 buckets for each index. Let’s verify!
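If you want to compute the rounding yourself, here is a quick sketch (it assumes the two-argument LOG, available from SQL Server 2012 onwards):

-- Next power of two >= 200: LOG(200, 2) ~ 7.64, CEILING gives 8, and 2^8 = 256.
SELECT POWER(2, CEILING(LOG(200, 2))) AS [Actual bucket count];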

Select  name 'Index Name', 
        object_name( object_id) 'Table Name',
        bucket_count 'Number of Buckets'
from    sys.hash_indexes
order by 2, 1 asc

Index Name        Table Name                     Number of Buckets
----------------  -----------------------------  -----------------
imo_idx_vFname    MyFirstMemporyOptimizedTable   256
imo_pk_iID        MyFirstMemporyOptimizedTable   256

(2 row(s) affected)

Let’s insert 90000 rows into the table using a natively compiled procedure, as below.

Use SQLSeverHelp_IMO 
GO
CREATE PROCEDURE [dbo].[InsertName] 
  WITH 
    NATIVE_COMPILATION, 
    SCHEMABINDING, 
    EXECUTE AS OWNER
AS 
BEGIN ATOMIC 
  WITH 
(TRANSACTION ISOLATION LEVEL = SNAPSHOT,
 LANGUAGE = 'us_english')
 
DECLARE @counter INT 
SET @counter = 1

WHILE @counter <= 90000 
  BEGIN 
      INSERT INTO dbo.MyFirstMemporyOptimizedTable 
      VALUES      (@counter, 
                   'SQLServer-Help.com', 
                   'Balmukund Lakhani');
      INSERT INTO dbo.MyFirstMemporyOptimizedTable 
      VALUES      (@counter + 1, 
                   'ExtremeExperts.com', 
                   'Vinod Kumar M');
      INSERT INTO dbo.MyFirstMemporyOptimizedTable 
      VALUES      (@counter + 2, 
                   'Blogs.msdn.com', 
                   'Balmukund Lakhani') 

      SET @counter = @counter + 3 
  END
END
GO

exec InsertName
go

 

Now, let’s examine hash index statistics.

SELECT Object_name(his.object_id)        'Table Name', 
       idx.name                            'Index Name', 
       total_bucket_count                'total buckets', 
       empty_bucket_count                'empty buckets',            
       total_bucket_count - empty_bucket_count as 'used buckets', 
       avg_chain_length                    'avg chain length', 
       max_chain_length                    'max chain length', 
       90000                                   as 'Rows - hardcoded value' 
FROM   sys.dm_db_xtp_hash_index_stats as his 
       JOIN sys.indexes as idx 
         ON his.object_id = idx.object_id 
            AND his.index_id = idx.index_id;   

Table Name                      Index Name       total buckets  empty buckets  used buckets  avg chain length  max chain length  Rows - hardcoded value
------------------------------  ---------------  -------------  -------------  ------------  ----------------  ----------------  ----------------------
MyFirstMemporyOptimizedTable    imo_idx_vFname   256            253            3             30000             30000             90000
MyFirstMemporyOptimizedTable    imo_pk_iID       256            0              256           351               355               90000

(2 row(s) affected)

[image: screenshot of the hash index statistics output above]

The chain length in conjunction with the bucket counts in the output tells us that for index imo_idx_vFname, only three buckets are used and each bucket has 30000 entries. If we go back and examine the stored procedure, we are inserting only three distinct values for the vFName column, each 30000 times. For the other index, imo_pk_iID, we don’t have any free bucket and the chain length is higher for each bucket; this index is the right candidate for a higher bucket count. A typical bucket count is between 1 and 2 times the number of distinct values in that column. Remember that the bucket count can’t be changed on the fly: the whole table needs to be dropped and recreated.
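For example, to size the bucket count from the actual number of distinct values, a sketch like the one below can help; the suggested number is just the 1-2x rule of thumb applied to our demo data.

-- Count distinct values per indexed column to size BUCKET_COUNT.
SELECT COUNT(DISTINCT vFName) AS DistinctFNames, -- 3 in our demo
       COUNT(DISTINCT iID)    AS DistinctIDs     -- 90000 in our demo
FROM   dbo.MyFirstMemporyOptimizedTable;
-- For iID, 1-2x of 90000 suggests something like BUCKET_COUNT = 131072 (2^17).
-- Since the bucket count can't be altered, the table must be dropped and
-- recreated with the new value in the index definition.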

Books Online references: sys.dm_db_xtp_hash_index_stats and Determining the Correct Bucket Count for Hash Indexes

Hope you have already downloaded SQL Server 2014 CTP2 and are following this series with me!

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • Posted in A - Z Series, Hekaton Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014 | 7 Comments »

    A-Z of In-Memory OLTP : Checkpoint – Part 2

    Posted by blakhani on December 24, 2013


    If you are reading this post before part 1, I would like you to read part 1 first so that you have the context for what you are reading. In part 1, we saw a demo and the usage of DATA files. In this blog I am going to talk about DELTA files.

    As we saw earlier, each DELTA file is paired with a DATA file. I must call out that DATA and DELTA files are not tied to a single table; they cover a transaction range (in fact, this was a question asked by one of the readers, Manoj (Blog), on my last post). In the in-memory world, when a record is deleted from a table, it’s not removed from the DATA file; instead, an identifier is inserted into the DELTA file. This tells SQL Server that, while loading the data (at database startup), it should not load this particular row into memory because it’s deleted. Now, imagine what happens when we update a row; we will see that shortly.

    Let’s see this theory in action. I have used the same script to populate the data, and here is the position where we left off in the last blog. Here is the query:

    Select count(*) from MyFirstMemporyOptimizedTable
    go
    SELECT internal_storage_slot,
           file_id, 
           pair_file_id,    
           file_type_desc,
           file_size_in_bytes / 1024.0 / 1024.0      [File Size in MB], 
           file_size_used_in_bytes / 1024.0 / 1024.0 [File Used in MB], 
           total_bytes / 1024.0 / 1024.0             [Total MB], 
           inserted_row_count ,
           deleted_row_count
    FROM   sys.dm_db_xtp_checkpoint_files 
    WHERE  total_bytes > 0        -- non zero size
    and       internal_storage_slot is not null 
    order by 1 desc
    

     

    [image: sys.dm_db_xtp_checkpoint_files output after loading the data]

    Now, I am going to delete one third of the rows and make sure to fire a manual checkpoint.

    Delete from MyFirstMemporyOptimizedTable
    where iID%3 = 0
    go
    checkpoint
    go
    
    ----------------------
    (3334 row(s) affected)

    If we run the old query again:

    [image: checkpoint file output after the delete, showing the DELTA file entries]

    Here come the DELTA files. Note that the internal_storage_slot is zero for one pair and one for the new DATA file. After some time, if we run the exact same query again, we see a different output.

    [image: checkpoint file output a while later, with the DELTA reference gone]

    If we look at the file_id, it has changed from the previous screenshot. It looks like a checkpoint fired by itself and removed the reference to the DELTA file. The current DATA file shows 6668 rows, which is exactly the number of rows in the table. There is no need for a DELTA file here.

    Now, I am going to fire an update statement to modify all the ‘ExtremeExperts.com’ rows and see the effect.

    update MyFirstMemporyOptimizedTable
    set vFName = 'www.ExtremeExperts.com'
    where vFName = 'ExtremeExperts.com'
    go
    checkpoint
    go
    


    ----------------------
    (3334 row(s) affected)

    [image: DMV output after the update, first screenshot]

    [image: DMV output after the update, second screenshot]

    This matches the theory that an UPDATE is split into a DELETE and an INSERT. We updated 3334 rows, which appended 3334 rows to the DELTA file (the deleted versions) and 3334 rows to the DATA file (the new versions).

    In the “after update” output above, we can see three DATA files and one DELTA file. Let’s fire one more query to understand the transaction range. Here is the query:

    SELECT internal_storage_slot,
           file_id, 
           pair_file_id,    
           file_type_desc,
           inserted_row_count ,
           deleted_row_count,
           transaction_id_lower_bound,
           transaction_id_upper_bound
    FROM   sys.dm_db_xtp_checkpoint_files 
    WHERE  total_bytes > 0        -- non zero size
    and       internal_storage_slot is not null 
    order by 1 desc
    

     

    [image: transaction_id_lower_bound and transaction_id_upper_bound for each slot]

    Each internal_storage_slot covers a range of transactions. The upper bound of one slot matches the lower bound of the next slot.

    It is important to recall that writing to (appending to, to be precise) the DELTA and DATA files is NOT done by any user transaction but by a background checkpoint thread. In our tests we are firing manual checkpoints to see the immediate effect and learn; that would not be the case on a production server. The background thread scans the log buffers (and, if needed, the transaction log file) to append to those files. As per the Hekaton whitepaper by Kalen Delaney, below are the steps performed by a checkpoint.

    1. All buffered writes are flushed to the data and delta files.

    2. A checkpoint inventory is constructed that includes descriptors for all files from the previous checkpoint plus any files added by the current checkpoint. The inventory is hardened to the transaction log.

    3. The location of the inventory is stored in the transaction log so that it is available at recovery time.

    By the time I wrote the last few lines, another system process had run, and here is the final output.

    [image: final checkpoint file output after the system process ran]
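    If you want to see what state each checkpoint file is in at any moment, the DMV exposes it. Below is a sketch (it assumes the state_desc column documented in Books Online; the exact states you see will vary over time):

    -- Summarize checkpoint files by type and state (output varies over time).
    SELECT file_type_desc,
           state_desc,
           COUNT(*) AS [Number of Files]
    FROM   sys.dm_db_xtp_checkpoint_files
    GROUP  BY file_type_desc, state_desc
    ORDER  BY file_type_desc, state_desc;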

    We will discuss the merge process in a later part of the series.

    Hope you have already downloaded SQL Server 2014 CTP2 and are learning In-Memory OLTP (a.k.a. Hekaton) with me. If you haven’t yet, be sure to check out SQL Server 2014 CTP2 so that you can follow this series and get your hands dirty.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • Posted in A - Z Series, Hekaton Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014 | 3 Comments »

    A-Z of In-Memory OLTP : Checkpoint – Part 1

    Posted by blakhani on December 19, 2013


    In today’s advanced world, RAM is still volatile. Once you pull the power plug, all the contents of RAM are lost. Since we are talking about in-memory tables, what would happen to the data in those tables? If we have defined a table as durable (SCHEMA_AND_DATA) in SQL Server, durability has to be maintained. During recovery of the database (also known as crash recovery), all the content should come back. To achieve this, SQL Server logs to the transaction log and writes checkpoints for in-memory tables.

    If you are a SQL Server DBA and have worked with earlier versions of SQL Server, checkpoints are definitely known to you. You can read more about the earlier version of checkpoint in Books Online. In short, a checkpoint is a process which writes dirty pages from memory to the data file. This is done to reduce the recovery time of the database during crash recovery. A checkpoint can be fired by a system thread based on the recovery interval, triggered on demand when the system is low on memory, or caused by other operations such as a log backup.

    A “similar” concept of checkpoint exists for in-memory tables as well. I quoted “similar” because the mechanism is different from the known checkpoint process for disk-based tables. To maintain durability of data for in-memory tables, SQL Server logs changes to the transaction log (which is also different from disk-based tables) and writes information during checkpoint. In this blog, we will try to understand the various things which happen during a checkpoint. We should remember that transaction logging alone is sufficient to recover the data, but if there is no checkpoint then we would need to read the complete transaction log during crash recovery, which would be inefficient. So, periodically a checkpoint is fired and data is dumped into checkpoint files. During recovery, we then recover from the checkpoint and replay only the portion of the transaction log which comes after the checkpoint.

    Checkpoint files are created in the filegroup marked CONTAINS MEMORY_OPTIMIZED_DATA, which is the container for these files. During checkpoint, SQL Server tries to use the log buffer in memory to write information into the checkpoint files. If the log buffer does not have the needed log blocks, it ends up reading the transaction log file. SQL Server uses FILESTREAM as the underlying storage mechanism for checkpoint files. The checkpoint information is written to disk by checkpoint streams, which come in two flavors:

    • DATA streams:
      • These files are approximately 128 MB in size.
      • They contain only the inserted rows of tables in a given timestamp range.
    • DELTA streams:
      • These are paired (1:1) with DATA files.
      • The size of these files is not constant.
      • They store the IDs of deleted rows.

    Note that both files are append-only, which means the IO is always sequential. As soon as a table is created, DATA files are pre-allocated at 128 MB. DATA and DELTA files form a pair for a timestamp range. Once the current DATA file becomes full, SQL Server switches to a new DATA file.

    Enough of theory? Let’s see a checkpoint in action.

    For demo purposes, I am creating a table with a wide row in the database. Here is the complete script:

    -- Create database having container C:\IMO_Database\SQLSeverHelp_IMO_dir
    CREATE DATABASE SQLSeverHelp_IMO 
        ON PRIMARY (
        NAME = [SQLSeverHelp_IMO_data]
        ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO.mdf'
        )
        
        ,FILEGROUP [SQLSeverHelp_IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA ( 
        NAME = [SQLSeverHelp_IMO_dir]
        ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO_dir'
        ) 
        
        LOG ON (
        NAME = [SQLSeverHelp_IMO_log]
        ,Filename = 'C:\IMO_Database\SQLSeverHelp_IMO_log.ldf'
        ) 
    GO
    
    
    -- Create In-Memory table
    Use SQLSeverHelp_IMO
    GO
    CREATE TABLE MyFirstMemporyOptimizedTable
    (
    iID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    vFName CHAR(4000) NOT NULL,
    vLName CHAR(4000) NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
    GO

    As soon as the table is created, we see DATA and DELTA files in the container. We can use the DMV sys.dm_db_xtp_checkpoint_files to see the information. Let’s first run a very basic query to understand the files.

    SELECT file_id, 
           pair_file_id, 
           file_type_desc, 
           is_active 
    FROM   sys.dm_db_xtp_checkpoint_files 
    ORDER  BY is_active DESC;   
    


    Here is the output with some highlights. For clarity, I have used the same color for matching file_id and pair_file_id values.

    [image: DATA and DELTA file pairs highlighted in the DMV output]

    As we can see in the output, the DATA and DELTA files come in pairs. I have highlighted only two pairs, but if you look carefully, all files are paired. Note that the number of files created is a function of the number of logical CPUs on the system. My machine has 8 logical CPUs, so I am seeing 9 pairs. The is_active column represents whether a file is currently in use or not. Another way to find paired files is to use the column called internal_storage_slot. Only those files which have a non-null slot are used by the checkpoint.

    SELECT internal_storage_slot,
           file_id, 
           pair_file_id,    
           file_type_desc
    FROM   sys.dm_db_xtp_checkpoint_files 
    WHERE  internal_storage_slot is NOT NULL

    Here is the output. Two files having the same slot form a DATA-DELTA pair (slot 3 in the screenshot below).

    [image: files sharing internal_storage_slot 3 forming a DATA-DELTA pair]
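    You can also list each DATA file next to its paired DELTA file directly; here is a small sketch that simply follows the pair_file_id column:

    -- Each DATA file points to its DELTA partner via pair_file_id.
    SELECT d.file_id      AS data_file_id,
           d.pair_file_id AS delta_file_id
    FROM   sys.dm_db_xtp_checkpoint_files AS d
    WHERE  d.file_type_desc = 'DATA';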

    Here is how the physical files look on the operating system:

    [image: checkpoint files in the C:\IMO_Database\SQLSeverHelp_IMO_dir container]

    Now, you may ask: how would I associate a file in the OS with the file_id and pair_file_id columns in sys.dm_db_xtp_checkpoint_files? Let’s use other columns of the DMV to explore that information.

    [image: DMV columns matched to the physical file names]

    I have tried to make the picture self-explanatory. Highlights with the same color mark the matching text.

    Now, let’s try and check what happens when we insert some rows into the table. As we discussed earlier, the DATA file contains the rows which are inserted and the DELTA file has entries for deleted rows. I have used the below script to load ~10K rows into the table.

    DECLARE @counter INT 
    SET @counter = 0 
    WHILE @counter <= 10000 
      BEGIN 
          INSERT INTO myfirstmemporyoptimizedtable 
          VALUES      (@counter, 
                       'SQLServer-Help.com', 
                       'Balmukund Lakhani'), 
                      (@counter + 1, 
                       'ExtremeExperts.com', 
                       'Vinod Kumar M'), 
                      (@counter + 2, 
                       'SQLAuthority.com', 
                       'Pinal Dave') 
    
          SET @counter = @counter + 3 
      END

    Now, let’s go ahead and query the DMV again to see if we have something interesting! But wait! We don’t know whether a checkpoint has fired automatically or not. So before running the query, I fired a manual checkpoint via the CHECKPOINT command in Management Studio.

    SELECT internal_storage_slot,
           file_id, 
           pair_file_id,    
           file_type_desc,
           file_size_in_bytes / 1024.0 / 1024.0      [File Size in MB], 
           file_size_used_in_bytes / 1024.0 / 1024.0 [File Used in MB], 
           total_bytes / 1024.0 / 1024.0             [Total MB], 
           inserted_row_count 
     FROM   sys.dm_db_xtp_checkpoint_files 
     WHERE  total_bytes > 0        -- non zero size
     and       internal_storage_slot is not null 

    As we can see, at this point only one DATA file has a non-zero size, and the number of rows shown is 10002, matching what we inserted earlier.

    [image: DMV output showing one DATA file with 10002 inserted rows]

    As we insert more and more rows, data accumulates in the DATA file; once it’s full, the next file is opened for use and the current file becomes read-only. Please remember that 128 MB is NOT a hard limit for a checkpoint DATA file. A transaction can’t span two DATA files, so a bigger transaction can cause a file to grow beyond 128 MB. For demo purposes, instead of 10K rows, I inserted ~40K rows in one transaction.
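    What I ran was roughly the sketch below. Two hedges: the key range is arbitrary (chosen to stay past the earlier iID values), and inside an explicit transaction a memory-optimized table needs the WITH (SNAPSHOT) hint (or the MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT database option), because plain READ COMMITTED is allowed only for autocommit transactions.

    -- One big transaction: all ~40K wide rows must land in a single DATA file.
    BEGIN TRANSACTION;

    DECLARE @counter INT = 20000;  -- arbitrary start, past the earlier iID values
    WHILE @counter < 60000
      BEGIN
          INSERT INTO dbo.MyFirstMemporyOptimizedTable WITH (SNAPSHOT)
          VALUES      (@counter,
                       'SQLServer-Help.com',
                       'Balmukund Lakhani');
          SET @counter = @counter + 1;
      END;

    COMMIT TRANSACTION;
    CHECKPOINT;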

    [image: DMV output showing a DATA file grown beyond 128 MB]

    If you are wondering why we are not seeing any DELTA file in any of the query outputs, recall that the DELTA file contains the identifiers of DELETEd rows, and so far I have been doing only INSERTs. To give you some time to grasp the information in this blog, I will stop here; we will discuss other aspects of checkpoint in the next blog of this A-Z series.

    In case you have any questions, post them in the comment section and I will be happy to answer them.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • Posted in Hekaton Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014 | 11 Comments »