Help: SQL Server

Sharing my knowledge about SQL Server troubleshooting skills

Archive for May, 2014

SQL 2014 Learning Series # 10 – New Feature – Resource Governor to control I/O (Part 2)

Posted by blakhani on May 20, 2014


Reading theory is important, but hands-on practice helps you remember a concept for a longer time. In the last post about Resource Governor, we learned the basics. In this blog, we will see Resource Governor in action. If you have SQL Server 2014 installed, you can easily follow the steps and copy/paste the scripts. As a demo, we will create two resource pools and two workload groups with different settings.

Step 1: Create Resource Pools

As a first step, we create resource pools to limit the I/O done by a specific pool. ThousandIOPs sets the maximum I/Os per second (IOPS) to 1000, and TenIOPs sets the maximum IOPS to 10. This is done via the parameter MAX_IOPS_PER_VOLUME.

CREATE RESOURCE POOL [ThousandIOPs] WITH
(
    MAX_IOPS_PER_VOLUME = 1000
)
GO
CREATE RESOURCE POOL [TenIOPs] WITH
(
    MAX_IOPS_PER_VOLUME = 10
)
GO

Step 2: Create Workload Groups tied to the resource pools

In the code below, I create workload groups named “FastIO” and “SlowIO” and associate them with the resource pools “ThousandIOPs” and “TenIOPs” respectively.

-- Create a  Workload Group for FastIO 
CREATE WORKLOAD GROUP [FastIO]
USING [ThousandIOPs]
GO
-- Create a new Workload Group for throttled IO
CREATE WORKLOAD GROUP [SlowIO]
USING [TenIOPs]
GO
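To confirm that the pools and groups were created, we can query the catalog views (a quick sanity check, not part of the original demo; the DMV counterparts show the active configuration only after the reconfigure in step 4):

```sql
-- View the stored definitions of workload groups and their resource pools
SELECT g.name AS workload_group,
       p.name AS resource_pool
FROM sys.resource_governor_workload_groups AS g
JOIN sys.resource_governor_resource_pools AS p
     ON g.pool_id = p.pool_id;
```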

 

Step 3: Define a classifier function to bind requests to workload groups

In step 3, I define a classifier function that identifies the workload group into which each request should be classified. Notice that I have used the SUSER_NAME() function to derive the group name that is returned. In other words, if user SlowIO logs in, the session is bound to group SlowIO and hence tied to pool TenIOPs.

-- Create a Classifier Function for Resource Governor
USE master
GO
CREATE FUNCTION dbo.MyClassifierFunction()
RETURNS SYSNAME WITH SCHEMABINDING
AS
BEGIN
    DECLARE @GroupName SYSNAME
    
    IF SUSER_NAME() = 'SlowIO'
    BEGIN
        SET @GroupName = 'SlowIO'
        RETURN @GroupName;
    END
    ELSE IF SUSER_NAME() = 'FastIO'
    BEGIN
        SET @GroupName = 'FastIO'
        RETURN @GroupName;
    END
    ELSE 
    BEGIN
        SET @GroupName = 'Default'
    END    
    RETURN @GroupName;
END
GO

Step 4: Register the classifier function with Resource Governor

So far we have set up Resource Governor to control I/O, but one last thing remains: we need to alter Resource Governor to use the classifier function defined in step 3, and then reconfigure it to apply all the new settings. This can be done by running the following code:

-- Register the Classifier Function within Resource Governor
ALTER RESOURCE GOVERNOR WITH
(
    CLASSIFIER_FUNCTION = dbo.MyClassifierFunction
)
GO
-- Reconfigure resource governor
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
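As a quick check (a sketch, not part of the original demo), we can verify that the classifier function is registered and that no reconfiguration is pending:

```sql
-- A NULL classifier_function means no classifier is registered
SELECT OBJECT_NAME(classifier_function_id) AS classifier_function,
       is_reconfiguration_pending
FROM sys.dm_resource_governor_configuration;
```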

Testing IO Resource Governor

To test Resource Governor, we will create two logins, SlowIO and FastIO, which will fall into the pools TenIOPs and ThousandIOPs respectively. I have granted them sysadmin rights for simplicity of the demo.

CREATE LOGIN SlowIO WITH PASSWORD = 'pass@word1'
GO
CREATE LOGIN FastIO WITH PASSWORD = 'pass@word1'
GO
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [SlowIO]
GO
ALTER SERVER ROLE [sysadmin] ADD MEMBER [FastIO]
GO
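Once the logins connect, we can check which workload group each session was classified into (a sketch using the standard session and workload-group DMVs):

```sql
SELECT s.session_id,
       s.login_name,
       g.name AS workload_group
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_resource_governor_workload_groups AS g
     ON s.group_id = g.group_id
WHERE s.login_name IN ('SlowIO', 'FastIO');
```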

 

Create a database and a table, and populate data, using the script below.

CREATE DATABASE rgiotest 
go 
USE rgiotest; 
go 
CREATE TABLE iotest 
  ( 
     id             INT IDENTITY  CONSTRAINT [PK_IOTest_ID] PRIMARY KEY CLUSTERED (id ASC), 
     iintegercolumn INT, 
     ccharcolumn    CHAR(50) 
  ); 
go 
SET nocount ON; 
INSERT INTO iotest 
            (iintegercolumn, 
             ccharcolumn) 
SELECT TOP (20000) Row_number() 
                     OVER( 
                       ORDER BY a.object_id), 
                   'X'
FROM   sys.all_columns AS a 
       CROSS JOIN sys.all_columns AS b
go 100 -- this would run above insert 100 times

The above script took around 30 seconds on my laptop. Now I will run a script that does a lot of reads, using the two different logins.

CHECKPOINT
DBCC DROPCLEANBUFFERS

USE RGIOTEST;
GO
IF OBJECT_ID ('tempdb..#IOTest') is not null
   DROP TABLE #IOTest;
GO
SELECT top 2000000 * INTO #IOTest FROM IOTest;
GO

I have used DBCC DROPCLEANBUFFERS in the above script because I don’t want SQL Server to read the data from memory; I want it to do physical I/O.

Logged in using the SlowIO account

[Screenshot: query execution in SSMS under the SlowIO login]

The script took 37 seconds; below are the PerfMon counters which I discussed in the last blog.

[Screenshot: PerfMon Resource Pool Stats counters during the SlowIO run]

The value is throttled to around 10, which is the cap we set (MAX_IOPS_PER_VOLUME = 10).

Logged in using the FastIO account

[Screenshot: query execution in SSMS under the FastIO login]

Here the same query executed in 4 seconds.

[Screenshot: PerfMon Resource Pool Stats counters during the FastIO run]

Here the value is well under control: we capped the pool at 1000 IOPS, and the workload stays well under that limit.
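As an alternative to PerfMon, the same throttling can be observed from the new per-volume DMV (a sketch; these counters are cumulative since the pool was created or reconfigured):

```sql
SELECT rp.name AS pool_name,
       rpv.volume_name,
       rpv.read_io_issued_total,
       rpv.read_io_throttled_total,
       rpv.read_io_stall_total_ms
FROM sys.dm_resource_governor_resource_pool_volumes AS rpv
JOIN sys.dm_resource_governor_resource_pools AS rp
     ON rpv.pool_id = rp.pool_id;
```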

The demo shown here is extremely simple, but you should now have an understanding of how to set up and use Resource Governor to throttle I/O in SQL Server 2014.

Hope you are learning something new about SQL Server 2014 from this series.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)

    Posted in SQL 2014 Learning Series, SQL Server 2014 | Tagged: , , , | 3 Comments »

    SQL 2014 Learning Series # 9 – New Feature – Resource Governor to control I/O (Part 1)

    Posted by blakhani on May 15, 2014


    Have you ever come across a situation where you wanted to re-index a complete database while other queries faced slowness because of an I/O bottleneck? In earlier versions of SQL Server, Resource Governor could control only CPU and memory. SQL Server 2014 introduces Resource Governor for I/O as well, enabling DBAs to manage and control the allocation of I/O resources. To read more about earlier versions, you can refer to Books Online. As per the documentation, below are the goals considered while designing Resource Governor:

    • Minimize the impact of runaway queries: runaway queries are queries that consume resources for a long time. Think of a reporting query that consumes 100% of a resource as soon as it runs. While such a query executes, other queries suffer from a resource bottleneck.
    • Improve predictability of query execution: during maintenance tasks (index rebuilds, statistics updates) or ETL, those processes may use resources heavily, making query execution times unpredictable. This is generally not a desirable situation.

    Let’s first understand the basic functionality and various terms used in resource governor. Below picture shows various components of resource governor.

    [Diagram: components of Resource Governor – classifier function, workload groups, resource pools (default group in orange, internal pool in green)]

    As soon as a request comes in from any source, it is assigned to a particular workload group via a user-defined function known as the classifier function. The classifier function can use various connection attributes to decide the bucket into which the request should fall; these buckets are the workload groups, each of which maps to a resource pool. Requests that are not classified go to the default group (orange), which uses the remaining resources. We should also remember that there is an internal pool (green) which cannot be controlled via Resource Governor. The maximum number of pools that can be configured in SQL Server is 64.

    Someone may still ask: why does I/O need to be governed? Think of a situation where you are a database service provider, with many customers hosting databases on your server. What if one of your naughty customers runs a read/write-heavy workload? Someone else would be impacted, right?

    Let’s see what’s new in Resource Governor. The additions in SQL 2014 are the MIN_IOPS_PER_VOLUME and MAX_IOPS_PER_VOLUME options in the CREATE RESOURCE POOL syntax below. This means we can now set, get, and alter the minimum and maximum I/O operations per second (IOPS) per volume for a resource pool.

    CREATE RESOURCE POOL pool_name
    [ WITH
        (
            [ MIN_CPU_PERCENT = value ]
            [ [ , ] MAX_CPU_PERCENT = value ] 
            [ [ , ] CAP_CPU_PERCENT = value ] 
            [ [ , ] AFFINITY {SCHEDULER =
                      AUTO | ( <scheduler_range_spec> ) 
                    | NUMANODE = ( <NUMA_node_range_spec> )} ] 
            [ [ , ] MIN_MEMORY_PERCENT = value ]
            [ [ , ] MAX_MEMORY_PERCENT = value ]
            [ [ , ] MIN_IOPS_PER_VOLUME = value ]
            [ [ , ] MAX_IOPS_PER_VOLUME = value ]
        ) 
    ]
    [;]
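    As a sketch of the new options (the pool name here is illustrative, not part of the demo), a pool can reserve a minimum and cap a maximum number of IOPS per volume in one statement:

```sql
CREATE RESOURCE POOL [ReportingPool] WITH
(
    MIN_IOPS_PER_VOLUME = 50,   -- reserve at least 50 IOPS per volume
    MAX_IOPS_PER_VOLUME = 500   -- throttle at 500 IOPS per volume
)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO
```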

     

    Various changes have been introduced in DMVs to monitor Resource Governor for I/O, and new DMVs have been added.

    Below are the existing DMVs with new columns:

    • sys.resource_governor_configuration
      • max_outstanding_io_per_volume
    • sys.dm_resource_governor_resource_pools:
      • min_iops_per_volume
      • max_iops_per_volume
      • read_io_queued_total
      • read_io_issued_total
      • read_io_completed_total
      • read_io_throttled_total
      • read_bytes_total
      • read_io_stall_total_ms
      • read_io_stall_queued_ms
      • write_io_queued_total
      • write_io_issued_total
      • write_io_completed_total
      • write_io_throttled_total
      • write_bytes_total
      • write_io_stall_total_ms
      • write_io_stall_queued_ms
      • io_issue_violations_total
      • io_issue_delay_total_ms

    New DMV:

    • sys.dm_resource_governor_resource_pool_volumes
      • pool_id
      • volume_name
      • read_io_queued_total
      • read_io_issued_total
      • read_io_completed_total
      • read_io_throttled_total
      • read_bytes_total
      • read_io_stall_total_ms
      • read_io_stall_queued_ms
      • write_io_queued_total
      • write_io_issued_total
      • write_io_completed_total
      • write_io_throttled_total
      • write_bytes_total
      • write_io_stall_total_ms
      • write_io_stall_queued_ms
      • io_issue_violations_total
      • io_issue_delay_total_ms
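    A minimal sketch of reading the new per-pool I/O columns from the existing DMV:

```sql
SELECT name,
       min_iops_per_volume,
       max_iops_per_volume,
       read_io_throttled_total,
       write_io_throttled_total
FROM sys.dm_resource_governor_resource_pools;
```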

    Other than DMVs, extended events and performance counters have been added to monitor and troubleshoot Resource Governor for I/O. Since my instance name is SQL2014, the counter object name is MSSQL$SQL2014:Resource Pool Stats. Eight new counters (4 for read and 4 for write) have been added in SQL 2014; note that these counters are on a per-pool basis. Here is the query to get the list of counters:

    SELECT DISTINCT object_name,
                    counter_name
    FROM   sys.dm_os_performance_counters
    WHERE  object_name LIKE '%Pool%'

    MSSQL$SQL2014:Resource Pool Stats  Active memory grant amount (KB) 
    MSSQL$SQL2014:Resource Pool Stats  Active memory grants count
    MSSQL$SQL2014:Resource Pool Stats  Avg Disk Read IO (ms) 
    MSSQL$SQL2014:Resource Pool Stats  Avg Disk Write IO (ms) 
    MSSQL$SQL2014:Resource Pool Stats  Cache memory target (KB) 
    MSSQL$SQL2014:Resource Pool Stats  Compile memory target (KB)
    MSSQL$SQL2014:Resource Pool Stats  CPU control effect %
    MSSQL$SQL2014:Resource Pool Stats  CPU usage %
    MSSQL$SQL2014:Resource Pool Stats  CPU usage % base 
    MSSQL$SQL2014:Resource Pool Stats  CPU usage target % 
    MSSQL$SQL2014:Resource Pool Stats  Disk Read Bytes/sec 
    MSSQL$SQL2014:Resource Pool Stats  Disk Read IO Throttled/sec
    MSSQL$SQL2014:Resource Pool Stats  Disk Read IO/sec 
    MSSQL$SQL2014:Resource Pool Stats  Disk Write Bytes/sec
    MSSQL$SQL2014:Resource Pool Stats  Disk Write IO Throttled/sec 
    MSSQL$SQL2014:Resource Pool Stats  Disk Write IO/sec
    MSSQL$SQL2014:Resource Pool Stats  Max memory (KB) 
    MSSQL$SQL2014:Resource Pool Stats  Memory grant timeouts/sec 
    MSSQL$SQL2014:Resource Pool Stats  Memory grants/sec
    MSSQL$SQL2014:Resource Pool Stats  Pending memory grants count 
    MSSQL$SQL2014:Resource Pool Stats  Query exec memory target (KB)
    MSSQL$SQL2014:Resource Pool Stats  Target memory (KB) 
    MSSQL$SQL2014:Resource Pool Stats  Used memory (KB) 

    In the next part of this series, we will see the I/O Resource Governor in action.

  • Cheers,
  • Balmukund Lakhani
  • Twitter @blakhani

  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • Posted in SQL 2014 Learning Series, SQL Server 2014 | Tagged: , , , | Leave a Comment »

    SQL 2014 Learning Series # 8 – New Feature – Delayed Durability (Part 2)

    Posted by blakhani on May 13, 2014


    In part 1 of this post about delayed durability, we learned the basics. In this part, let’s see it in action. I must point out that this feature is also called “Lazy Commit”. Recall that we can set durability to delayed at two levels: the database level and the transaction level (on the COMMIT statement). The idea of this demo is to show the difference in the writing of transaction log records. I have used a free tool called Process Monitor (free download here), which is very useful for profiling file/registry/network activity by processes.

    Here is the script to create database and table.

    USE master
    go
    IF DB_ID('MyDemoDB') IS NOT NULL
    begin
    ALTER DATABASE [MyDemoDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE [MyDemoDB]
    end
    GO
    -- Create New Database, set recovery to full and take full backup
    CREATE DATABASE [MyDemoDB]
    GO
    ALTER DATABASE [MyDemoDB] SET RECOVERY FULL WITH NO_WAIT
    GO
    ALTER DATABASE [MyDemoDB] MODIFY FILE ( NAME = N'MyDemoDB', SIZE = 500MB )
    GO
    ALTER DATABASE [MyDemoDB] MODIFY FILE ( NAME = N'MyDemoDB_log', SIZE = 500MB )
    GO
    BACKUP DATABASE [MyDemoDB] to DISK = 'MyDemoDB.bak'
    GO
    
    -- create a table. 
    Use [MyDemoDB]
    GO
    create table DemoTable 
    (col1 int identity primary key clustered, 
    col2 varchar(100))
    GO

    We can combine the database-level setting and the COMMIT-level option to get a fully durable or a delayed durable transaction: with DELAYED_DURABILITY = DISABLED every transaction is fully durable, with FORCED every transaction is delayed durable, and with ALLOWED the COMMIT-level option delayed_durability decides per transaction.

    For the demo, I have selected the database setting ALLOWED and the transaction setting delayed_durability = OFF to achieve a fully durable transaction. Below is the script to change and view the settings for the database.

    USE [master]
    GO
    ALTER DATABASE [MyDemoDB] SET DELAYED_DURABILITY = ALLOWED WITH NO_WAIT
    GO
    -- Verify setting
    select [name], [database_id], [delayed_durability], [delayed_durability_desc]
    from sys.databases where name = 'MyDemoDB'
    GO

    Here is a stored procedure in which I set delayed durability to OFF at the transaction level, i.e. fully durable.

    Use MyDemoDB
    go
    Create Procedure Insert_DemoTable
    as
    begin
    SET NOCOUNT ON
     
    DECLARE @counter AS INT = 0
    DECLARE @start datetime
    Select @start = getdate()
     
        WHILE (@counter < 100)
            BEGIN
                begin tran
                 INSERT INTO DemoTable VALUES( @counter)
                 SET @counter = @counter + 1
                 commit with (delayed_durability = OFF)
             END
    Select datediff(second, @start, getdate() )
    end
    go

    Since our database setting is ALLOWED, we will create another procedure that sets delayed durability to ON at the transaction level.

    Use MyDemoDB
    go
    Create Procedure Insert_DemoTable_Delayed
    as
    begin
    SET NOCOUNT ON
     
    DECLARE @counter AS INT = 0
    DECLARE @start datetime
    Select @start = getdate()
     
        WHILE (@counter < 1000)
            BEGIN
                begin tran
                 INSERT INTO DemoTable VALUES( @counter)
                 SET @counter = @counter + 1
                 commit with (delayed_durability = ON)
             END
    Select datediff(second, @start, getdate() )
    end
    go
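    With both procedures in place, the two runs can be compared side by side (each procedure returns its own elapsed time in seconds):

```sql
USE MyDemoDB
GO
EXEC dbo.Insert_DemoTable          -- fully durable commits
GO
EXEC dbo.Insert_DemoTable_Delayed  -- delayed durable commits
GO
```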
    
    

    Now we are ready to perform some tests to see the effect on log record writing in fully durable and delayed durable transactions. I started ProcMon and added a filter (Ctrl+L) on “Path” for “E:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\MyDemoDB_log.ldf”, which is the transaction log file path for the MyDemoDB database; you may want to change it for your environment. Here are two pictures which explain a lot by themselves.

    [Image 1: Parallel capture – SSMS and ProcMon showing the activity done by sqlservr.exe during a fully durable transaction]

    [Image 2: Parallel capture – SSMS and ProcMon showing the activity done by sqlservr.exe during a delayed durable transaction]

    Here are the important points to note:

    • In the case of fully durable transactions, we see 100 WriteFile calls to the .ldf file. This is because the stored procedure performs 100 transactions.
    • When delayed durable transactions are performed, SQL Server does not write synchronously to the transaction log file for each transaction.
    • In the delayed durable test, the transactions were clubbed together and only one WriteFile call was issued, asynchronously.
    • The write size was 4,096 bytes per transaction across 100 calls in the fully durable test, but 36,864 bytes in a single call in the delayed durable test.
    • If we use the undocumented function fn_dblog, we can see that the log record structure is the same.

    To test further, I modified the procedures to increase the counter to 100,000; the time taken by each is shown below.

    [Screenshot: execution times with the counter increased to 100,000]

    The test I have done may not match the pattern of your workload, but it shows the performance gain possible for certain kinds of workloads. This feature is available in ALL editions of SQL Server 2014 and is NOT restricted to In-Memory OLTP (a few people have asked me this!).

    SQL Server 2014 has a built-in system stored procedure called sp_flush_log which can be used to forcefully flush the log buffers to the transaction log. You can read more here.
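    A minimal example of forcing the flush (useful, for instance, before a planned shutdown, so that committed but not-yet-hardened delayed durable transactions reach the log):

```sql
-- Flush all pending delayed durable log records to disk
EXEC sys.sp_flush_log;
```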

    I have heard people explain this feature as SQL Server not using write-ahead logging (WAL). That is totally incorrect. WAL is not about synchronous writing of the transaction log; it is about writing to the log ahead of the data change. If WAL were broken, crash recovery of the database would be a problem. I hope this blog gives you an understanding of this new feature, called Lazy Commit or Delayed Durability.

  • Cheers
  • Balmukund Lakhani
  • Twitter @blakhani
  • Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)
  • Posted in SQL 2014 Learning Series, SQL Server 2014 | Tagged: , , | 1 Comment »