Help: SQL Server

Sharing my knowledge about SQL Server troubleshooting skills


Posts Tagged ‘database’

A-Z of In-Memory OLTP : Garbage Collection (Part 2)

Posted by blakhani on February 11, 2014


In the previous blog of the A-Z Series, we started learning about garbage collection: the theory behind cleaning up stale rows in SQL Server In-Memory OLTP tables. If you have not read it yet, I would suggest starting from part 1. The following DMVs are available to check the work done by the garbage collection system.

  • sys.dm_xtp_gc_stats
  • sys.dm_xtp_gc_queue_stats
  • sys.dm_db_xtp_gc_cycle_stats – Database Level

Before we move any further, there is another important concept which helps in reading the DMV data, called a “generation”. We can think of a generation as a queue. There are 16 generation queues available in SQL Server 2014 CTP2. A generation queue holds the transactions which have generated row versions. As per Kalen’s whitepaper, this number (16) may change in a later build. The details of the generation queues can be seen via the DMV sys.dm_db_xtp_gc_cycle_stats. There is also another set of queues, equal in number to the schedulers/cores on the machine. These GC worker queues contain the actual work queued by the garbage collection process, and their statistics can be seen via the DMV sys.dm_xtp_gc_queue_stats.

On a periodic basis, the transactions which have generated versions are moved to the generation queues. The GC thread picks up transactions from the generation queues and places them in the GC worker queues. The frequency of checking for completed transactions is 1 minute, which can become more aggressive if more transactions are waiting to be processed. Once the work is queued, user threads can cooperate with the main GC thread and clean up the garbage kept in the worker queues.
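The one-worker-queue-per-scheduler point above can be sanity-checked by comparing two counts; sys.dm_os_schedulers and its status column are standard DMVs, but the comparison itself is just my sketch:

```sql
-- Number of GC worker queues reported by the In-Memory OLTP engine
SELECT COUNT(*) AS gc_worker_queues
FROM   sys.dm_xtp_gc_queue_stats;

-- Number of visible online schedulers, for comparison
SELECT COUNT(*) AS visible_schedulers
FROM   sys.dm_os_schedulers
WHERE  status = 'VISIBLE ONLINE';
```

On most machines the two numbers should line up with the number of cores visible to SQL Server.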

Let’s see this in action. For demo purposes, I have created a wide-column table with around 30K rows. Here is the script I used.

-- Create database with IMO Filegroup, If exists drop it.
Use Master
go
If db_id('SQLSeverHelp_IMO') is NOT NULL
drop database SQLSeverHelp_IMO
GO
CREATE DATABASE SQLSeverHelp_IMO 
    ON PRIMARY (
    NAME = [SQLSeverHelp_IMO_data]
    ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO.mdf'
    )
    
    ,FILEGROUP [SQLSeverHelp_IMO_FG] CONTAINS MEMORY_OPTIMIZED_DATA ( 
    NAME = [SQLSeverHelp_IMO_dir]
    ,FILENAME = 'C:\IMO_Database\SQLSeverHelp_IMO_dir'
    ) 
    
    LOG ON (
    NAME = [SQLSeverHelp_IMO_log]
    ,Filename = 'C:\IMO_Database\SQLSeverHelp_IMO_log.ldf'
    ) 
    COLLATE Latin1_General_100_BIN2 
GO


-- Create table in database
-- use the database which is already created
Use SQLSeverHelp_IMO
GO
CREATE TABLE MyFirstMemporyOptimizedTable
(
iID INT NOT NULL,
vFName CHAR(900) NOT NULL,
vSite CHAR(7000) NOT NULL,
CONSTRAINT imo_pk_iID primary key NONCLUSTERED HASH (iID) WITH (BUCKET_COUNT = 1048576),
index imo_idx_vFname  NONCLUSTERED HASH (vFName) WITH (BUCKET_COUNT = 8)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
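Before loading data, we can confirm the table really landed as memory-optimized; is_memory_optimized and durability_desc are columns added to sys.tables in SQL Server 2014:

```sql
Use SQLSeverHelp_IMO
GO
SELECT name, is_memory_optimized, durability_desc
FROM   sys.tables
WHERE  name = 'MyFirstMemporyOptimizedTable'
GO
```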


Use SQLSeverHelp_IMO 
GO
CREATE PROCEDURE [dbo].[InsertName] 
  WITH 
    NATIVE_COMPILATION, 
    SCHEMABINDING, 
    EXECUTE AS OWNER
AS 
BEGIN ATOMIC 
  WITH 
(TRANSACTION ISOLATION LEVEL = SNAPSHOT,
 LANGUAGE = 'us_english')
 
DECLARE @counter INT 
SET @counter = 1

WHILE @counter <= 30000
  BEGIN 
      INSERT INTO dbo.MyFirstMemporyOptimizedTable 
      VALUES      (@counter, 
                   'SQLServer-Help.com', 
                   'Balmukund Lakhani');
      INSERT INTO dbo.MyFirstMemporyOptimizedTable 
      VALUES      (@counter + 1, 
                   'ExtremeExperts.com', 
                   'Vinod Kumar M');
      INSERT INTO dbo.MyFirstMemporyOptimizedTable 
      VALUES      (@counter + 2, 
                   'SQLAuthority.Com', 
                   'Pinal Dave') 

      SET @counter = @counter + 3 
  END
END
GO

exec InsertName
go

 

After creating the objects and pumping in the data, I recycled the SQL Server service to clear the data in the DMVs. It’s worth mentioning that this should not be done on a production server. Post restart, I waited for some time so that sys.dm_db_xtp_gc_cycle_stats had some rows. Recall that the GC cycle runs approximately every 1 minute. I have not run any transactions, and that’s why “base_generation” stays at 1. We can also see the 16 generations as columns in each row.

USE    SQLSeverHelp_IMO
go
select    * 
from    sys.dm_db_xtp_gc_cycle_stats

[Image: output of SELECT * FROM sys.dm_db_xtp_gc_cycle_stats]

Each row corresponds to one cycle in which GC has scanned committed transactions. If we look at the difference between the start ticks of two consecutive rows, it is 60000 ticks, which is 60 seconds. I have used a CTE in the query below to find the same.
WITH tbldifference 
     AS (SELECT Row_number() 
                  OVER ( 
                    ORDER BY ticks_at_cycle_start) AS RowNumber, 
                cycle_id, 
                ticks_at_cycle_start, 
                base_generation 
         FROM   sys.dm_db_xtp_gc_cycle_stats)
-- CTE 
SELECT Cur.cycle_id, 
       Cur.ticks_at_cycle_start                            AS CurrentTickstValue , 
       Prv.ticks_at_cycle_start                            AS PreviousTicksValue, 
       Cur.ticks_at_cycle_start - Prv.ticks_at_cycle_start AS Difference, 
       Cur.base_generation 
FROM   tbldifference Cur 
       LEFT OUTER JOIN tbldifference Prv 
                    ON Cur.RowNumber = Prv.RowNumber + 1 
ORDER  BY Cur.ticks_at_cycle_start  
 
[Image: CTE query output showing a 60000-tick difference between cycles]
 
If we check the server startup time in minutes and the number of cycles, they would be the same.
USE SQLSeverHelp_IMO
go
select count(*) from sys.dm_db_xtp_gc_cycle_stats
go

SELECT Datediff(mi, last_startup_time, Getutcdate()) 
       'Minutes Since Last restart' 
FROM   sys.dm_server_services 
WHERE  servicename = 'SQL Server (MSSQLSERVER)' 
 
[Image: cycle count matches the minutes since the last restart]
 
Based on the above, it’s clear that the cycle is running every minute. We have not performed any update or delete to generate stale rows so far. Here is the memory used by the table (query available here).
[Image: memory used by the table]
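The memory query itself is only linked above, not reproduced; a minimal sketch against sys.dm_db_xtp_table_memory_stats (a database-level DMV in SQL Server 2014; column names as documented) would be:

```sql
Use SQLSeverHelp_IMO
GO
SELECT Object_name(object_id) AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb
FROM   sys.dm_db_xtp_table_memory_stats
WHERE  object_id = Object_id('dbo.MyFirstMemporyOptimizedTable')
GO
```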
 
Now, let’s generate some garbage rows and check the DMVs again. We are going to delete 1000 rows.
 
DECLARE @i INT 
SET @i = 1 
WHILE ( @i <= 1000 ) 
  BEGIN 
      DELETE FROM MyFirstMemporyOptimizedTable 
      WHERE  vFName = 'SQLServer-Help.com' 
             AND iID <= @i * 3 
      SET @i = @i + 1 
  END   
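Before looking at the DMVs, a quick count confirms the deletes took effect; since we started with 30,000 rows and deleted 1000, about 29,000 should remain:

```sql
Use SQLSeverHelp_IMO
GO
SELECT Count(*) AS remaining_rows
FROM   dbo.MyFirstMemporyOptimizedTable
GO
```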

 
As soon as rows are deleted, we start seeing activity in garbage collection. If we run the same query to find the cycles again, here is the output:
 

cycle_id   CurrentTickstValue   PreviousTicksValue   Difference   base_generation
---------  -------------------  -------------------  -----------  ---------------
.... Truncated for clarity ....
41         92361828             92301828             60000        1
42         92421828             92361828             60000        1
43         92481828             92421828             60000        1
44         92504781             92481828             22953        1
45         92564781             92504781             60000        993
46         92565281             92564781             500          993
47         92625281             92565281             60000        993
48         92685281             92625281             60000        993
49         92745281             92685281             60000        993
50         92805281             92745281             60000        993

(50 row(s) affected)

 
As we can see above, the cycles have become more aggressive (less than 60 seconds apart) because we performed a lot of transactions. Also notice that base_generation moves in multiples of 16 (explained earlier): 993 - 1 = 992 = 62 * 16. We need a minimum of 16 transactions to increase the base generation.
 
Now, let’s look at the output of the DMV sys.dm_xtp_gc_queue_stats.
 
[Image: output of sys.dm_xtp_gc_queue_stats]
We can see that work was enqueued and then dequeued, and current_queue_length is zero. This means there is no garbage data left in the queue. Let’s check the memory usage of the table.
[Image: memory usage of the table after garbage collection]
This means garbage collection has done its work and freed up the memory used by the table, as the unused memory has increased.
 
We will continue to explore garbage collection in the next blog of the series.
 
 
  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)

    Posted in A - Z Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014

    A-Z of In-Memory OLTP : Garbage Collection (Part 1)

    Posted by blakhani on February 7, 2014


    We all want our surroundings to be clean. Sometimes we put in the effort to keep them clean, and sometimes we rely on someone else to do it for us. In short, none of us likes garbage around us. Garbage is something which was useful in the past but is no longer useful. On the same lines, the computer world also has the term garbage (Wikipedia), and to clean it up there is a concept called Garbage Collection (Wikipedia). Did you know that earlier versions of SQL Server had something similar? Yes, it was the “ghost cleanup” background process. In SQL Server 2014, garbage collection does “similar” work for data in in-memory tables.

    There are two kinds of Garbage Collection in In-Memory OLTP:

    • Garbage Collection of Rows in Memory
    • Garbage Collection of Checkpoint Files

    Now you might understand why I called this post part 1. Yes, we would start with the clean-up of rows in memory.

    In an earlier post, we discussed the row structure and timestamps. If a row is deleted or updated (which is internally a delete + insert), we end up with different row versions. It would not be a good idea to delete them physically at the same time. Because of this, we would have a number of discarded (garbage) rows in memory. To handle such discarded row versions, the garbage collection process was introduced. This process periodically cleans up the rows which are “garbage” at that point.

    To understand the picture below, I would like you to go through the posts on hash indexes (part 1 and part 2). I have done color coding to make it easy to read.

    [Image#1 – Rows 1 and 4 are garbage at this point.]

    In the picture above, we have an end timestamp for row 1 (Balmukund, Microsoft) and row 3 (PinalD, PluralSight). Notice the first and fourth rows: the end timestamp of row 1 is the begin timestamp of row 4. This means we fired an “Update” and modified Balmukund to SQLServer at timestamp 300. On the other hand, row 3 was deleted from the system via a “Delete” statement at timestamp 170. As indicated in the picture, the oldest active transaction timestamp is 350, which means there is no “active” transaction in SQL Server whose timestamp is less than 350. Recall that the life of a row is determined by its begin and end timestamps. Since all transactions are above timestamp 350, there is no transaction which would “need” those row versions. This means rows 1 and 3 are eligible for garbage collection; they can be removed and their memory released back to the system.

    The garbage collection system is designed to be non-blocking, cooperative, efficient, responsive and scalable (taken from the Hekaton whitepaper). Let me try to explain the reasoning behind each of those words:

    • Non-Blocking: In some systems, when garbage collection kicks in, a huge amount of work is done and the other “real” workload comes to a halt. This is NOT true for SQL Server In-Memory OLTP: garbage collection happens in the background all the time.
    • Cooperative: In SQL Server In-Memory OLTP, garbage collection is not the responsibility of a dedicated system thread alone. All other user threads which read/write data also do a small piece of the overall garbage collection work.
    • Efficient: A regular reader thread does the clean-up work itself if it finds a row which is stale/garbage.
    • Responsive: The garbage collection process responds to memory pressure. Recall the lazywriter thread in the traditional SQL Server world.
    • Scalable: The work of garbage collection is shared. Any transaction which completes would look at the queue and dequeue some of the work.

    All above points would be clear if you read further.

    Let’s assume a select statement is fired which uses the hash index created on the FName column (for example, SELECT * FROM table WHERE FName = 'SQLServer'). Since the hash value of the column is 9, it would scan all rows of that bucket. While doing that, it finds a row (Balmukund, Microsoft) which is stale (refer to Image#1). The reader thread delinks that row from the chain and moves forward. Note that the row is still NOT deleted from memory, but a little piece of the garbage collection work has been done. Here is the state after the select query.

    [Image#2 – Cooperative work done by a user thread scanning the bucket.]

    You might ask: what about the red chain above (bucket 9)? That would be cleaned by the regular garbage collection thread if a user thread does not get to it first. In other words, if a query traverses the index on Company and scans the bucket for hash value 9, the row is delinked by that user thread; otherwise the system thread does it. Below is the final state of the system once all garbage is cleaned and the memory is released.

    [Image#3 – Both rows cleaned by garbage collection.]

    This means that garbage collection is not a discrete process in SQL Server In-Memory OLTP. It can be done by a dedicated system thread and also cooperatively by user threads. This makes garbage collection more efficient.

    Hope you have learned something new today! In the next blog of the series, we would see the DMVs which can be used to watch garbage collection in action.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • Posted in A - Z Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014

    A-Z of In-Memory OLTP : Native Compilation Advisor in SSMS

    Posted by blakhani on February 4, 2014


    In the last part of the A-Z Series, we learned about the Memory Optimization Advisor, the SSMS wizard which can be used to migrate disk-based tables to in-memory tables. This post is dedicated to another wizard, this one for stored procedures. It is important to remember that, unlike the Memory Optimization Advisor, this wizard does NOT convert regular stored procedures into natively compiled stored procedures. The idea of the “Native Compilation Advisor” is to scan the code of a stored procedure and give advice about what to change in it.

    Where is the wizard?

    The “Native Compilation Advisor” wizard can be launched by right-clicking the stored procedure we wish to convert to natively compiled. In the screenshot below, notice the difference in options when we right-click a regular stored procedure vs. a natively compiled stored procedure.

    [Image#1 – Comparison of right click.]

    What is the purpose?

    As explained earlier, this advisor does NOT convert a regular stored procedure to a natively compiled one; it provides just advice and pointers. Let me explain via a “good boy” and a “bad boy” example. We would use the same database we have been using so far; here is the blog with the script for the database and tables.

    Good Boy!

    Since we already have the database SQLSeverHelp_IMO and the tables (DiskBasedTable and MyFirstMemporyOptimizedTable), let’s create a regular stored procedure as below.

    -- Create regular stored procedure in database and referencing IMO table
    Use SQLSeverHelp_IMO 
    GO
    CREATE PROCEDURE [dbo].[InsertName] @iID INT, @vFName VARCHAR(20), @vLName VARCHAR(20)
    AS 
    BEGIN  
    
      INSERT INTO dbo.MyFirstMemporyOptimizedTable
      VALUES (@iID, @vFName, @vLName);
    
    END
    GO
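    Just to be sure the procedure itself works before running the advisor over it, we can exercise it once (the parameter values here are arbitrary):

    ```sql
    Use SQLSeverHelp_IMO
    GO
    EXEC dbo.InsertName @iID = 100, @vFName = 'Balmukund', @vLName = 'Lakhani'
    GO
    ```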

    Now, let’s launch the wizard by right-clicking the InsertName stored procedure in SQL Server Management Studio (shown in Image#1). As soon as we click Native Compilation Advisor, we get the welcome screen.

    [Image#2 – Welcome Screen.]

    As we can read on the welcome screen – “The Native Compilation Advisor will assist you to discover and evaluate Transact-SQL elements in your stored procedure that are not supported in Native Compilation”. See, I told you! Okay, let’s hit Next.

    The screen below appears, and you would notice that the Next button is disabled.

    [Image#3 – Stored Procedure Validation – no errors]

    This is because there is no T-SQL element in this procedure which is unsupported in natively compiled stored procedures. Cancel the wizard to exit.

    Bad Boy!

    Now, I am going to create a stored procedure with some (actually many) elements which are not supported in natively compiled stored procedures. Please note that this is for CTP2 and things might change in RTM.

    -- Create regular stored procedure in database   
    -- with many elements not supported by natively complied stored procedures  
    USE sqlseverhelp_imo 
    
    go 
    
    IF Object_id('usp_InsertName_Errors') IS NOT NULL 
      DROP PROCEDURE usp_insertname_errors 
    
    go 
    
    CREATE PROCEDURE [dbo].[Usp_insertname_errors] 
    @iID    INT, 
    @vFName VARCHAR(20), 
    @vLName VARCHAR(20) 
    AS 
      BEGIN 
          -- four part naming   
          SELECT * 
          FROM   hk.sqlseverhelp_imo.dbo.myfirstmemporyoptimizedtable 
    
          -- three part naming 
          SELECT * 
          FROM   sqlseverhelp_imo.dbo.myfirstmemporyoptimizedtable 
    
          -- multivalued insert 
          INSERT INTO dbo.myfirstmemporyoptimizedtable 
          VALUES      (@iID, 
                       @vFName, 
                       @vLName), 
                      (@iID + 1, 
                       @vFName, 
                       @vLName) 
    
          -- Subquery 
          SELECT vfname 
          FROM   diskbasedtable 
          WHERE  iid IN (SELECT is_published 
                         FROM   sys.objects) 
    
          -- goto 
          GOTO hi 
    
          -- cursor 
          DECLARE c CURSOR FOR 
            SELECT iid, 
                   vfname 
            FROM   dbo.diskbasedtable 
    
          HI: 
    
          -- cte 
          WITH cte (i, j) 
               AS (SELECT iid, 
                          vfname 
                   FROM   dbo.diskbasedtable) 
          SELECT * 
          FROM   cte 
      END 
    
    go   
    

    If we run the wizard for the above stored procedure, the Next button is enabled. This is one of those Next buttons which is not going to make you happy. Agree?

    [Image#4 – Stored Procedure Validation – with errors]

    Fine, so there are problems. Hitting Next tells us more.

    [Image#5 – Final Results!]

    As highlighted in Image#5, we also have the option to “generate report” for these unsupported elements. Here is the report, which I saved in HTML format. It gives us the code and line number, grouped under the T-SQL element (like *, Four-part names, etc.).

    Native Compilation Advisor evaluation results for [SQLSeverHelp_IMO].[usp_InsertName_Errors]

    Report Date/Time:2/3/2014 12:16 PM

    Transact-SQL Element Transact-SQL Code Line Number
    * SELECT * FROM hk.sqlseverhelp_imo.dbo.myfirstmemporyoptimizedtable 3
      SELECT * FROM sqlseverhelp_imo.dbo.myfirstmemporyoptimizedtable 7
      WITH cte (i, j) AS (SELECT iid, vfname FROM dbo.diskbasedtable) SELECT * FROM cte 37
    Four-part names hk.sqlseverhelp_imo.dbo.myfirstmemporyoptimizedtable 4
    Three-part names sqlseverhelp_imo.dbo.myfirstmemporyoptimizedtable 8
    Multi-row table-valued constructor VALUES (@iID, @vFName, @vLName), (@iID + 1, @vFName, @vLName) 12
    One-part names diskbasedtable 21
      cte 42
    IN iid IN (SELECT is_published FROM sys.objects) 22
    Subquery (SELECT is_published FROM sys.objects) 22
    GOTO GOTO hi 26
    Cursors CURSOR FOR SELECT iid, vfname FROM dbo.diskbasedtable 30
    WITH clause WITH cte (i, j) AS (SELECT iid, vfname FROM dbo.diskbasedtable) 37
      cte (i, j) AS (SELECT iid, vfname FROM dbo.diskbasedtable) 37

     

    Here is the link shown in Image#5: http://go.microsoft.com/fwlink?LinkID=311587. It talks about common patterns and their workarounds (the link might change after release of the product).
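    While the linked article is the authoritative list, the most mechanical rewrites for two of the elements reported above look like this (my own sketch, not taken from the article):

    ```sql
    -- Multi-row table-valued constructor -> one INSERT per row
    INSERT INTO dbo.myfirstmemporyoptimizedtable VALUES (@iID, @vFName, @vLName);
    INSERT INTO dbo.myfirstmemporyoptimizedtable VALUES (@iID + 1, @vFName, @vLName);

    -- Three-part name -> two-part name (a natively compiled procedure can only
    -- reference objects in its own database anyway)
    SELECT iid
    FROM   dbo.myfirstmemporyoptimizedtable;
    ```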

    Conclusion

    The Native Compilation Advisor can be used to get advice from SQL Server about the changes needed to convert a regular stored procedure into a natively compiled one. For some common problem patterns, workarounds are documented at the link provided by the advisor. Conversion of a stored procedure is still a manual process and not as clean as with the Table Advisor (Memory Optimization Advisor). Using both wizards, we can get advice and take action prior to a migration. At the end I would say that this is the only wizard I have seen which doesn’t have a Finish button.

    Have you installed SQL Server 2014 CTP2 yet? If you haven’t, be sure to check it out so that you can follow this series and get your hands dirty.

    Stay Tuned!

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • Posted in A - Z Series, Hekaton Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014