Help: SQL Server

Sharing my knowledge about SQL Server Troubleshooting Skills


Posts Tagged ‘Hekaton’

A-Z of In-Memory OLTP : Performance Monitor Counters

Posted by blakhani on February 13, 2014


Have you ever had a chance to see how athletes train? The coach constantly watches the stopwatch while the athletes make multiple runs; the stopwatch is used to measure their performance on each run. In a similar way, SQL Server In-Memory OLTP performance can be monitored using Performance Monitor (PerfMon). In this blog we will discuss some of the performance counters we can use to monitor various parameters of In-Memory OLTP.

We must call out a difference between the way In-Memory OLTP counters (also known as XTP counters) and traditional counters are laid out in PerfMon. For the traditional performance counters, the object is instance specific and the counters are listed under it (refer to Image#1).

image
Image#1 Layout of Traditional SQL Counters

On the other hand, for XTP the counter objects are listed first, and the instance names appear under them (refer to Image#2).

image
Image#2 Layout of XTP related SQL Counters

On my machine I have only one default instance, and that’s why we see only MSSQLSERVER under “Instance of selected object”. In CTP2, we can see the following performance monitor objects.

XTP Cursors
XTP Garbage Collection
XTP Phantom Processor
XTP Transaction Log
XTP Transactions

The list of these counters can be retrieved using the SQL Server query below.

SELECT * 
FROM sys.dm_os_performance_counters 
WHERE object_name LIKE '%xtp%'

image
Image#3 SQL query to get list of XTP related SQL Counters

The meaning of each counter is documented in Books Online.

To demonstrate one of the concepts we learned earlier, we will use the “XTP Transaction Log” counters for the demo. In this blog about transaction logging concepts, we read: “Transaction log records are written only if the transaction on in-memory table is committed. This would mean that no UNDO information would be logged“.

Here is the information about the counters under “XTP Transaction Log”, taken from Books Online:

  • Log bytes written/sec – The number of bytes written to the SQL Server transaction log by the XTP engine (on average), per second.
  • Log records written/sec – The number of records written to the SQL Server transaction log by the XTP engine (on average), per second.
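
If Performance Monitor is not handy, the same two counters can also be read through the DMV shown earlier. Here is a minimal sketch (the LIKE pattern is deliberately loose, since the exact object_name prefix can vary between instances):

-- Read the two "XTP Transaction Log" counters from the DMV instead of PerfMon.
-- cntr_value for these "/sec" counters is cumulative; sample it twice and take
-- the difference to get a rate over the interval.
SELECT object_name, 
       counter_name, 
       cntr_value 
FROM   sys.dm_os_performance_counters 
WHERE  object_name LIKE '%XTP Transaction Log%' 
       AND counter_name IN ('Log bytes written/sec', 'Log records written/sec')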

 

Here is the script we can run while capturing PerfMon data at the same time.

-- Transaction 1: autocommit insert
Select Getdate() AS 'begin of Transaction 1'
INSERT INTO [dbo].[MyFirstMemporyOptimizedTable]
    ([iID],[vFName],[vSite])
     VALUES
    (1,'Balmukund','SQLServer-Help.com')
Select Getdate() AS 'End of Transaction 1'

-- Transaction 2: explicit transaction, committed after a 10-second delay
BEGIN TRAN
Select Getdate()  AS 'begin of Transaction 2'
INSERT INTO [dbo].[MyFirstMemporyOptimizedTable]
    ([iID],[vFName],[vSite])
     VALUES
    (2,'Balmukund','SQLServer-Help.com')
-- Wait for 10 seconds
waitfor delay '00:00:10'
COMMIT TRAN
Select Getdate()  AS 'commit of Transaction 2'

-- Transaction 3: explicit transaction, rolled back after a 10-second delay
BEGIN TRAN
Select Getdate() AS 'begin of Transaction 3'
INSERT INTO [dbo].[MyFirstMemporyOptimizedTable]
    ([iID],[vFName],[vSite])
     VALUES
    (3,'Balmukund','SQLServer-Help.com')
-- Wait for 10 seconds
waitfor delay '00:00:10'
ROLLBACK TRAN
Select Getdate() AS 'rollback of Transaction 3'

Here is the output and perfmon data.

image

Image#4 perfmon counter and transaction

Here are the conclusions we can make from the graph.

  • Logging is done at commit time of the transaction. We introduced a 10-second delay in our script, which is why the second spike appears only when that transaction commits.
  • For an uncommitted/rolled-back transaction, nothing is logged in the transaction log (the sketch below shows one way to double-check this from T-SQL).
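
One way to cross-check the second conclusion from T-SQL is to peek at the log itself. The sketch below uses the undocumented fn_dblog function (test systems only; its output can change between builds) and assumes that In-Memory OLTP log records are reported with operation LOP_HK:

-- Run from the database that holds the memory-optimized table.
-- Count and size the In-Memory OLTP log records in the active portion of the log.
-- Run once after the COMMIT and once after the ROLLBACK: the rolled-back
-- transaction should not add any LOP_HK records.
SELECT Operation, 
       COUNT(*)                 AS record_count, 
       SUM([Log Record Length]) AS total_bytes 
FROM   sys.fn_dblog(NULL, NULL) 
WHERE  Operation = 'LOP_HK' 
GROUP  BY Operation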

Stay tuned for learning more on Hekaton / In-Memory OLTP.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn – Paperback, Kindle

    Posted in A - Z Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014 | Leave a Comment »

    A-Z of In-Memory OLTP : Garbage Collection (Part 2)

    Posted by blakhani on February 11, 2014


    In the previous blog of the A-Z Series, we started learning about garbage collection; we covered the theory of garbage collection of stale rows in SQL Server In-Memory OLTP tables. If you have not read that, I would like you to start from part 1. The DMVs below are available to check the work done by the garbage collection system (a quick way to peek at all three is sketched right after the list).

    • sys.dm_xtp_gc_stats
    • sys.dm_xtp_gc_queue_stats
    • sys.dm_db_xtp_gc_cycle_stats – Database Level
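
    A quick way to eyeball all three on a test instance is simply to select from each of them. The only thing to remember is that the cycle-stats DMV is database scoped, so it has to be run from the In-Memory OLTP database (created a little further down in this post):

    USE SQLSeverHelp_IMO
    go
    SELECT * FROM sys.dm_xtp_gc_stats          -- overall garbage collection activity
    SELECT * FROM sys.dm_xtp_gc_queue_stats    -- one row per GC worker queue
    SELECT * FROM sys.dm_db_xtp_gc_cycle_stats -- one row per GC cycle (database level)
    go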

    Before we move any further, there is another important concept which helps in reading the DMV data, called a “generation”. We can think of a generation as a queue. There are 16 queues (generations) available in SQL Server 2014 CTP2. A generation queue holds the transactions which have generated row versions. As per Kalen’s whitepaper, this number (16) may change in later builds. The details of the generation queues can be seen in the DMV sys.dm_db_xtp_gc_cycle_stats. We would also see another set of queues equal to the number of schedulers/cores on the machine. These GC worker queues contain the actual work which is queued by the garbage collection process, and their statistics can be seen in the DMV sys.dm_xtp_gc_queue_stats.

    On a periodic basis, the transactions which have generated versions are moved to the generation queues. GC threads pick up transactions from the generation queues and place them in the GC worker queues. The frequency of checking for completed transactions is 1 minute, which can become more aggressive if more transactions are waiting to be processed. Once the work is queued, the user threads can cooperate with the main GC thread and clean up the garbage that is kept in the worker queues.
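
    Since the number of GC worker queues tracks the number of schedulers/cores, a quick sanity check on any box is to count the visible online schedulers. The one-queue-per-scheduler mapping is my reading of the description above, so treat this as a rough sketch:

    -- Schedulers available for user work; the GC worker queue count should be
    -- in the same ballpark on this machine.
    SELECT COUNT(*) AS visible_online_schedulers 
    FROM   sys.dm_os_schedulers 
    WHERE  status = 'VISIBLE ONLINE'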

    Let’s see this in action. For demo purposes, I have created a wide-column table with around 30K rows. Here is the script which can be used.

    -- Create database with IMO Filegroup, If exists drop it.
    Use Master
    go
    If db_id('SQLSeverHelp_IMO') is NOT NULL
    drop database SQLSeverHelp_IMO
    GO
    CREATE DATABASE SQLSeverHelp_IMO 
        ON PRIMARY (
        NAME = [SQLSeverHelp_IMO_data]
        ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO.mdf'
        )
        
        ,FILEGROUP [SQLSeverHelp_IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA ( 
        NAME = [SQLSeverHelp_IMO_dir]
        ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO_dir'
        ) 
        
        LOG ON (
        NAME = [SQLSeverHelp_IMO_log]
        ,Filename = 'C:\IMO_Database\SQLSeverHelp_IMO_log.ldf'
        ) 
        COLLATE Latin1_General_100_BIN2 
    GO
    
    
    -- Create table in database
    -- use the database which is already created
    Use SQLSeverHelp_IMO
    GO
    CREATE TABLE MyFirstMemporyOptimizedTable
    (
    iID INT NOT NULL,
    vFName CHAR(900) NOT NULL,
    vSite CHAR(7000) NOT NULL,
    CONSTRAINT imo_pk_iID primary key NONCLUSTERED HASH (iID) WITH (BUCKET_COUNT = 1048576),
    index imo_idx_vFname  NONCLUSTERED HASH (vFName) WITH (BUCKET_COUNT = 8)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
    GO
    
    
    Use SQLSeverHelp_IMO 
    GO
    CREATE PROCEDURE [dbo].[InsertName] 
      WITH 
        NATIVE_COMPILATION, 
        SCHEMABINDING, 
        EXECUTE AS OWNER
    AS 
    BEGIN ATOMIC 
      WITH 
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
     LANGUAGE = 'us_english')
     
    DECLARE @counter INT 
    SET @counter = 1
    
    WHILE @counter <= 30000
      BEGIN 
          INSERT INTO dbo.MyFirstMemporyOptimizedTable 
          VALUES      (@counter, 
                       'SQLServer-Help.com', 
                       'Balmukund Lakhani');
          INSERT INTO dbo.MyFirstMemporyOptimizedTable 
          VALUES      (@counter + 1, 
                       'ExtremeExperts.com', 
                       'Vinod Kumar M');
          INSERT INTO dbo.MyFirstMemporyOptimizedTable 
          VALUES      (@counter + 2, 
                       'SQLAuthority.Com', 
                       'Pinal Dave') 
    
          SET @counter = @counter + 3 
      END
    END
    GO
    
    exec InsertName
    go
    
    

     

    After creating the objects and pumping in the data, I recycled the SQL Server service to clear the data in the DMVs. It’s worth mentioning that this should not be done on a production server. Post restart, I waited for some time so that sys.dm_db_xtp_gc_cycle_stats had some rows. Recall that the GC thread runs approximately every 1 minute. I have not run any transactions, and that’s why “base_generation” stays at 1. We can also see the 16 generations as columns in each row.

    USE    SQLSeverHelp_IMO
    go
    select    * 
    from    sys.dm_db_xtp_gc_cycle_stats

    image

    Each row corresponds to one cycle in which GC has scanned committed transactions. If we look at the difference between the start of two consecutive rows, it’s 60000 ticks, which is 60 seconds. I have used a CTE in the query below to find the same.
    WITH tbldifference 
         AS (SELECT Row_number() 
                      OVER ( 
                        ORDER BY ticks_at_cycle_start) AS RowNumber, 
                    cycle_id, 
                    ticks_at_cycle_start, 
                    base_generation 
             FROM   sys.dm_db_xtp_gc_cycle_stats)
    -- CTE 
    SELECT Cur.cycle_id, 
           Cur.ticks_at_cycle_start                            AS CurrentTicksValue, 
           Prv.ticks_at_cycle_start                            AS PreviousTicksValue, 
           Cur.ticks_at_cycle_start - Prv.ticks_at_cycle_start AS Difference, 
           Cur.base_generation 
    FROM   tbldifference Cur 
           LEFT OUTER JOIN tbldifference Prv 
                        ON Cur.RowNumber = Prv.RowNumber + 1 
    ORDER  BY Cur.ticks_at_cycle_start  
     
    image
     
    If we check the server startup time in minutes and the number of cycles, they would be the same.
    USE SQLSeverHelp_IMO
    go
    select count(*) from sys.dm_db_xtp_gc_cycle_stats
    go
    
    SELECT Datediff(mi, last_startup_time, Getutcdate()) 
           'Minutes Since Last restart' 
    FROM   sys.dm_server_services 
    WHERE  servicename = 'SQL Server (MSSQLSERVER)' 
    
     
    image
     
    Based on the above, it’s clear that the cycle runs every minute. We have not performed any update or delete to generate stale rows so far. Here is the memory used by the table. The query is available here.
    image
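
    The exact query behind that screenshot is in the linked post; a minimal sketch that reports similar numbers from sys.dm_db_xtp_table_memory_stats would look roughly like this:

    -- Memory allocated vs. used for the memory-optimized table and its indexes.
    USE SQLSeverHelp_IMO
    go
    SELECT OBJECT_NAME(object_id)            AS table_name, 
           memory_allocated_for_table_kb, 
           memory_used_by_table_kb, 
           memory_allocated_for_indexes_kb, 
           memory_used_by_indexes_kb 
    FROM   sys.dm_db_xtp_table_memory_stats 
    WHERE  object_id = OBJECT_ID('dbo.MyFirstMemporyOptimizedTable')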
     
    Now, let’s generate garbage rows and check the DMVs again. We are going to delete 1000 rows.
     
    DECLARE @i INT 
    SET @i = 1 
    WHILE ( @i <= 1000 ) 
      BEGIN 
          DELETE FROM MyFirstMemporyOptimizedTable 
          WHERE  vFName = 'SQLServer-Help.com' 
                 AND iID <= @i * 3 
          SET @i = @i + 1 
      END   
    
    
     
    As soon as rows are deleted, we start seeing activity from garbage collection. If we run the same query to find cycles again, here is the output:
     

    cycle_id   CurrentTicksValue   PreviousTicksValue   Difference   base_generation
    …. Truncated for clarity….
    41         92361828            92301828             60000        1
    42         92421828            92361828             60000        1
    43         92481828            92421828             60000        1
    44         92504781            92481828             22953        1
    45         92564781            92504781             60000        993
    46         92565281            92564781             500          993
    47         92625281            92565281             60000        993
    48         92685281            92625281             60000        993
    49         92745281            92685281             60000        993
    50         92805281            92745281             60000        993

    (50 row(s) affected)

     
    As we can see above, the cycles have become more aggressive (less than 60 seconds apart) because we performed a lot of transactions. Also notice that base_generation moves in multiples of 16 (explained earlier): 993 - 1 = 992 = 62 * 16. A minimum of 16 transactions is needed to increase the base generation.
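
    To see the jump in base_generation between consecutive cycles without re-running the CTE, a windowed variant along these lines also works (LAG is available from SQL Server 2012 onwards):

    -- Per the discussion above, generation_jump should be 0 or a multiple of 16.
    SELECT cycle_id, 
           ticks_at_cycle_start, 
           base_generation, 
           base_generation 
             - LAG(base_generation) OVER (ORDER BY ticks_at_cycle_start) AS generation_jump 
    FROM   sys.dm_db_xtp_gc_cycle_stats 
    ORDER  BY ticks_at_cycle_start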
     
    Now, let’s look at the output of the DMV sys.dm_xtp_gc_queue_stats.
     
    image
    We can see that work was enqueued and dequeued, and current_queue_length is zero. This means there is no garbage data in the queue. Let’s check the memory usage of the table.
    image
    This means garbage collection has done its work and freed up memory used by the table, as the unused memory has increased.
     
    We will continue to explore more on garbage collection in the next blog as well.
     
     
  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn – Paperback, Kindle
  • Posted in A - Z Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014 | Leave a Comment »

    A-Z of In-Memory OLTP : Garbage Collection (Part 1)

    Posted by blakhani on February 7, 2014


    We all want our surroundings to be clean. Sometimes we put in the effort to keep them clean, and sometimes we rely on someone else to keep them clean. In short, none of us likes garbage around us. Garbage is something which was useful in the past but is no longer useful. Along the same lines, the computer world also has the term garbage (Wikipedia), and to clean it up there is a concept called Garbage Collection (Wikipedia). Do you know that earlier versions of SQL Server had something similar? Yes, it was the “ghost cleanup” background process. In SQL Server 2014, garbage collection does “similar” work for data in in-memory tables.

    There are two kinds of garbage collection in In-Memory OLTP:

    • Garbage Collection of Rows in Memory
    • Garbage Collection of Checkpoint Files

    Now you might understand why I called this post part 1. Yes, we will start with the clean-up of rows in memory.

    In an earlier post, we discussed the row structure and timestamps. If a row is deleted or updated (an update is internally a delete + insert), we end up with different row versions. It would not be a good idea to delete them physically at the same time. Due to this we would have a number of discarded (garbage) rows in memory. To handle such discarded row versions, the garbage collection process is introduced. This process periodically cleans up the rows which are “garbage” at that point.
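
    To make that concrete, here is a tiny illustrative script. It assumes a database with a MEMORY_OPTIMIZED_DATA filegroup already exists (for instance the SQLSeverHelp_IMO database used elsewhere in this series), and the table name and values are made up purely to mirror Image#1 below:

    USE SQLSeverHelp_IMO
    go
    CREATE TABLE dbo.NameCompany
    (
    iID INT NOT NULL,
    FName CHAR(50) NOT NULL,
    Company CHAR(50) NOT NULL,
    CONSTRAINT pk_NameCompany PRIMARY KEY NONCLUSTERED HASH (iID) WITH (BUCKET_COUNT = 16),
    index idx_FName NONCLUSTERED HASH (FName) WITH (BUCKET_COUNT = 16),
    index idx_Company NONCLUSTERED HASH (Company) WITH (BUCKET_COUNT = 16)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
    go

    INSERT INTO dbo.NameCompany VALUES (1, 'Balmukund', 'Microsoft')
    INSERT INTO dbo.NameCompany VALUES (2, 'Vinod', 'ExtremeExperts')
    INSERT INTO dbo.NameCompany VALUES (3, 'PinalD', 'PluralSight')
    go

    -- An update is internally a delete + insert: the old (Balmukund, Microsoft)
    -- version gets an end timestamp and becomes garbage once no active
    -- transaction can still see it.
    UPDATE dbo.NameCompany SET FName = 'SQLServer' WHERE iID = 1

    -- A delete stamps the (PinalD, PluralSight) version with an end timestamp;
    -- that version is now garbage for the GC system to reclaim.
    DELETE FROM dbo.NameCompany WHERE iID = 3
    go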

    To understand the picture below, I would like you to go through the hash indexes posts (part 1 and part 2). I have done color coding to make it easy to read.

    image
    Image#1 Rows 1 and 3 are garbage at this point.

    In the picture above we have an end timestamp available for row 1 (Balmukund, Microsoft) and row 3 (PinalD, PluralSight). Notice the first and fourth rows – the end timestamp of row 1 is the begin timestamp of row 4. This means that we fired an “Update” and modified Balmukund to SQLServer at timestamp 300. On the other hand, row 3 was deleted from the system via a “Delete” statement at timestamp 170. As indicated in the picture, the lowest active transaction timestamp is 350, which means that there is no “active” transaction in the SQL Server whose timestamp is less than 350. Recall that the life of a row is determined by its begin and end timestamps. Since all transactions are above timestamp 350, there is no transaction which would “need” those rows. This means that rows 1 and 3 are eligible for garbage collection. These rows can be removed and their memory released back to the system.

    The garbage collection system is designed to be non-blocking, cooperative, efficient, responsive and scalable (taken from the Hekaton whitepaper). Let me try to explain the reasoning behind each of those words:

    • Non-Blocking: In some systems, when garbage collection activity starts, a huge amount of work is done and the other “real” workload comes to a halt. This is NOT true with SQL Server In-Memory OLTP. It happens in the background all the time.
    • Cooperative: In SQL Server In-Memory OLTP, garbage collection is not the responsibility of a dedicated system thread alone. All the other user threads that are reading/writing data also do a little piece of the overall garbage collection work.
    • Efficient: A regular reader thread does the work if it finds a row which is stale/garbage.
    • Responsive: The garbage collection process responds to memory pressure. Recall the lazywriter thread in the traditional SQL world.
    • Scalable: The work of garbage collection is shared. Any transaction which completes would look at the queue and de-queue work.

    All the points above will become clear as you read further.

    Let’s assume that a select statement is fired which uses the hash index created on the FName column (for example, select * from table where FName = ‘SQLServer’). Since the hash value of the column is 9, it would scan all rows in that bucket. While doing that, it would find a row (Balmukund, Microsoft) which is stale (refer to Image#1). The reader thread would delink that row from the chain and move forward. Note that the row is still NOT deleted from memory, but a little piece of garbage collection work has been done. Here is the after image, following the select query.

    image
    Image#2 Cooperative work done by User thread doing scan of the bucket.

    You might ask: what about the red chain above (bucket 9)? That would be cleaned by the regular garbage collection thread if a user thread is not doing it. This means that if we traverse using the index on company and scan the bucket of hash value 9, the row would be delinked by the user thread; otherwise the system thread would do it. Below is the final state of the system once all the garbage is cleaned and the memory is released.

    image
    Image#3 Both rows cleaned by garbage collection.

    This means that garbage collection is not a discrete process in SQL Server In-Memory OLTP. It can be done by a dedicated system thread and also cooperatively by user threads. This makes garbage collection more efficient.

    Hope you have learned something new today! In the next blog of the series, we will see the DMVs which can be used to see garbage collection in action.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn – Paperback, Kindle
  • Posted in A - Z Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014 | 1 Comment »