Help: SQL Server

Sharing my knowledge about SQL Server Troubleshooting Skills


A-Z of In-Memory OLTP : Checkpoint – Part 2

Posted by blakhani on December 24, 2013


If you are reading this post before part 1, I would like you to read part 1 first so that you have the context for what you are reading. In part 1, we saw a demo and the usage of DATA files. In this blog I am going to talk about DELTA files.

As we saw earlier, each DELTA file is paired with a DATA file. I must call out that DATA and DELTA files are not tied to a single table; they cover a transaction range (in fact, this was a question asked by one of the readers, Manoj (Blog), on my last post). In the in-memory world, when a record is deleted from a table, it’s not removed from the DATA file; instead, an identifier is inserted into the DELTA file. This tells SQL Server that while loading the data (at startup of the database), it should not load that particular row into memory because it’s deleted. Now, imagine a situation where we are doing an update to a row.
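In case you are following along without the part 1 setup handy, below is a minimal sketch of the kind of database and durable memory-optimized table this demo assumes. Only the table name MyFirstMemporyOptimizedTable and the iID/vFName columns come from this series; the database name, file path, second column size and bucket count are my illustrative assumptions, not the actual part 1 script.

-- Minimal sketch: a database with a MEMORY_OPTIMIZED_DATA filegroup (required for checkpoint files)
CREATE DATABASE InMemoryDemo
go
ALTER DATABASE InMemoryDemo ADD FILEGROUP InMemoryDemo_mod CONTAINS MEMORY_OPTIMIZED_DATA
go
ALTER DATABASE InMemoryDemo ADD FILE (NAME = 'InMemoryDemo_mod', FILENAME = 'C:\Data\InMemoryDemo_mod')
TO FILEGROUP InMemoryDemo_mod
go
USE InMemoryDemo
go
-- Durable (SCHEMA_AND_DATA) table, so rows are persisted through DATA/DELTA checkpoint files
CREATE TABLE MyFirstMemporyOptimizedTable
(
    iID    INT          NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    vFName VARCHAR(100) NOT NULL   -- populated with 'ExtremeExperts.com' later in this post
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
go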

Let’s see this theory in action. I have used the same script to populate the data, and here is the position where we left off in the last blog. Here is the query:

Select count(*) from MyFirstMemporyOptimizedTable
go
SELECT internal_storage_slot,
       file_id, 
       pair_file_id,    
       file_type_desc,
       file_size_in_bytes / 1024.0 / 1024.0      [File Size in MB], 
       file_size_used_in_bytes / 1024.0 / 1024.0 [File Used in MB], 
       total_bytes / 1024.0 / 1024.0             [Total MB], 
       inserted_row_count ,
       deleted_row_count
FROM   sys.dm_db_xtp_checkpoint_files 
WHERE  total_bytes > 0        -- non zero size
and       internal_storage_slot is not null 
order by 1 desc

 

[Screenshot: output of the query above – checkpoint files before the delete]

Now, I am going to delete one third of the rows and make sure to fire a manual checkpoint.

Delete from MyFirstMemporyOptimizedTable
where iID%3 = 0
go
checkpoint
go
----------------------
(3334 row(s) affected)

If we run the old query again, this is what we see:

[Screenshot: output of the same query after the delete and manual checkpoint]

Here come the DELTA files. Note that internal_storage_slot is zero for one pair and one for the new DATA file. After some time, if we run the exact same query again, we would see a different output.

[Screenshot: output of the same query after some time]

If we look at the file_id, it has changed from the previous screenshot. It looks like a checkpoint has fired by itself and removed the reference to the DELTA file. The current DATA file shows 6668 rows, which is exactly the number of rows in the table. There is no need for a DELTA file here.
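If you want to watch these files come and go without comparing screenshots, a query along these lines can help. This is just a sketch of mine; it assumes the state_desc column exists in sys.dm_db_xtp_checkpoint_files, while the other columns are the ones already used above.

-- Sketch: count checkpoint files by type (and state) to watch automatic checkpoints reshuffle them
SELECT file_type_desc,
       state_desc,            -- assumed column describing the file's lifecycle state
       Count(*) AS file_count
FROM   sys.dm_db_xtp_checkpoint_files
GROUP  BY file_type_desc, state_desc
ORDER  BY file_type_desc
go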

Now, I am going to fire an update statement to modify all the rows and see the effect.

update MyFirstMemporyOptimizedTable
set vFName = 'www.ExtremeExperts.com'
where vFName = 'ExtremeExperts.com'
go
checkpoint
go


----------------------

(3334 row(s) affected)

[Screenshots: output of the query before and after the update]

This matches the theory that an UPDATE is split into a DELETE and an INSERT. We updated 3334 rows, which caused 3334 rows to be appended to the DELTA file and 3334 rows to be appended to a DATA file.
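To verify this without counting rows in the screenshot, a rollup along these lines works; it is a small sketch of mine using only the columns we have already queried above.

-- Sketch: total rows appended to DATA vs. DELTA files; after the update both totals grow by 3334
SELECT file_type_desc,
       Sum(inserted_row_count) AS total_inserted_rows,
       Sum(deleted_row_count)  AS total_deleted_rows
FROM   sys.dm_db_xtp_checkpoint_files
WHERE  total_bytes > 0
and    internal_storage_slot is not null
GROUP  BY file_type_desc
go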

In the “after update” screenshot above, we can see three DATA files and one DELTA file. Let’s fire one more query to understand the transaction range. Here is the query:

SELECT internal_storage_slot,
       file_id, 
       pair_file_id,    
       file_type_desc,
       inserted_row_count ,
       deleted_row_count,
       transaction_id_lower_bound,
       transaction_id_upper_bound
FROM   sys.dm_db_xtp_checkpoint_files 
WHERE  total_bytes > 0        -- non zero size
and       internal_storage_slot is not null 
order by 1 desc

 

[Screenshot: output of the transaction range query above]

Each internal_storage_slot covers a range of transactions. The lower bound of one slot matches the upper bound of the previous one, so consecutive slots cover adjacent transaction ranges.
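If you want to see the ranges lined up rather than eyeballing the screenshot, here is a small sketch that orders the slots by their lower bound (same DMV and columns as the query above).

-- Sketch: list each slot's transaction range so adjacent ranges are easy to compare
SELECT internal_storage_slot,
       file_type_desc,
       transaction_id_lower_bound,
       transaction_id_upper_bound
FROM   sys.dm_db_xtp_checkpoint_files
WHERE  internal_storage_slot is not null
ORDER  BY transaction_id_lower_bound, file_type_desc
go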

It is important to recall that writing (appending, to be precise) to the DELTA and DATA files is NOT done by any user transaction but by a background checkpoint thread. In our tests we are firing manual checkpoints to see the immediate effect and learn, which would not be the case on a production server. The background thread scans the log buffers (and, if needed, the transaction log file) to append to those files; a quick way to peek at that activity is sketched after the steps below. As per the Hekaton whitepaper by Kalen Delaney, below are the steps performed by a checkpoint.

1. All buffered writes are flushed to the data and delta files.

2. A checkpoint inventory is constructed that includes descriptors for all files from the previous checkpoint plus any files added by the current checkpoint. The inventory is hardened to the transaction log.

3. The location of the inventory is stored in the transaction log so that it is available at recovery time.
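As promised above, here is a quick way to peek at that background activity. sys.dm_db_xtp_checkpoint_stats exists in SQL Server 2014, but since its exact column list can differ between builds, this sketch simply selects everything.

-- Sketch: overall checkpoint statistics for the database, including how much log the
-- background checkpoint thread has scanned and processed so far
SELECT *
FROM   sys.dm_db_xtp_checkpoint_stats
go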

By the time I wrote the last few lines, another system process had kicked in, and here is the final output.

[Screenshot: final output after the background activity]

We will discuss the merge process in a later part of the series.

I hope you have already downloaded SQL Server 2014 CTP2 and are learning In-Memory OLTP (a.k.a. Hekaton) with me. If you haven’t yet, be sure to grab SQL Server 2014 CTP2 so that you can follow this series and get your hands dirty.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)

