Help: SQL Server

Sharing my knowledge about SQL Server troubleshooting skills



A-Z of In-Memory OLTP : Database Startup and Recovery Process

Posted by blakhani on December 31, 2013


“If you are a seasoned driver, you have a number of parameters to check before you start your vehicle. These range from simple tasks – are all the doors locked, is no warning indicator lit, are the seatbelt signs off – to more critical activities, like whether the fuel tank reading is satisfactory keeping your destination in mind. If reaching a destination with your vehicle involves this much checking, yet we do it with such ease, the process of SQL Server startup likewise goes through a simple series of readings that we can start examining. Adding an In-Memory dimension to an already complex process is always going to be a learning in itself.”  These words are from Vinod (b|t) when I asked him to give me a real-life analogy for database startup. Hats off to you, Vinod!

Yes, in this blog post we will discuss database startup and recovery for in-memory tables. Database startup is also known as recovery of the database. Whenever SQL Server (re)starts, all databases go through a startup subroutine. Below are the situations where database startup or recovery is performed.

  • Restart of the SQL Server service
  • Restore from a backup
  • Failover to a mirror or an AlwaysOn secondary
  • ALTER DATABASE SET ONLINE
  • Failover of SQL Server in a cluster

In traditional databases having only disk-based tables, database startup/recovery has three phases: analysis, roll-forward (redo), and rollback (undo). These three phases are well documented in many places (Ref1, Ref2). In this blog, we will talk about the startup of databases which have in-memory tables.

Since we are talking about in-memory tables, one of the steps during database startup is to load the data of all in-memory tables into memory (RAM). In an earlier post, we talked about the DLLs of in-memory tables and learned that they are generated during database startup. We also discussed here that index information is not logged in the transaction log or data files, hence indexes need to be rebuilt while the data is loaded. To summarize, these are the major steps in the recovery of a database having in-memory tables:

  • Generate the DLLs of in-memory tables
  • Load data into memory using the checkpoint files (DATA and DELTA files) and the transaction log of the database
  • Create the indexes which are defined on the in-memory tables

If any of the above steps fails, database recovery fails and the database does not come online. Let's verify a few of these theoretical points via a demo.

Generate DLLs of in-memory tables

To demonstrate this, let's take the database offline using the command below:

USE master
go
ALTER DATABASE SQLSeverHelp_IMO SET OFFLINE WITH ROLLBACK IMMEDIATE
go

Once the database is offline, let's look at the folder which holds the DLLs for in-memory tables. On this SQL instance, there is no other database having in-memory tables.


As we learned earlier, this folder generally contains one subfolder per database ID, and each of those holds the DLLs for the objects of the respective database. On my SQL instance there is currently no online database having in-memory objects, so the folder is empty. Now, let's bring the database online.

ALTER DATABASE SQLSeverHelp_IMO SET ONLINE
go

The database ID for SQLSeverHelp_IMO is 8 on my SQL instance, and once I bring the database online, the DLLs appear under the folder for database ID 8.


You may ask: what about the DLLs of natively compiled stored procedures? Those are generated when the stored procedure is executed for the first time (it is compiled automatically).
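If you prefer a query to a folder listing, the loaded DLLs can also be checked from the server side. A minimal sketch, assuming the DMV tags these modules with an XTP description on this build:

-- Lists modules loaded by this SQL Server process that belong to XTP
SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE description LIKE '%XTP%'
go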

Load data into memory

As explained earlier, this step also differs from traditional disk-based tables. Once the database is started, we need to load the data using the checkpoint files and the transaction log of the database. In my previous blog posts on checkpoint (part 1 and part 2), I explained DATA and DELTA files.

The data loading process is depicted in the diagram referenced here: http://msdn.microsoft.com/en-us/library/dn133174(v=sql.120).aspx

There are three phases – we can call them analysis, data load, and redo. Note that there is no undo phase, because in-memory tables don't log uncommitted data into the transaction log (refer here). During the analysis phase, the transaction log is read and the last checkpoint is located. In the data load phase, we first load the DELTA files into memory; these contain pointers to the rows which were deleted, and they produce a "delta map" for each delta file. Then the DATA files are loaded into memory with the delta map applied as a filter, so only those rows which were not deleted are loaded. The loading happens in parallel (that's why the referenced diagram shows multiple 1s and 2s). You can imagine that with a slow disk subsystem, the time taken for the database to come online would be longer.
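To see the DATA and DELTA files that this load phase reads, we can query the checkpoint-file DMV inside the database. A minimal sketch, assuming the SQL Server 2014 column names (they changed between CTP builds):

USE SQLSeverHelp_IMO
go
-- One row per checkpoint file; DATA and DELTA files come in pairs
SELECT file_type_desc, state_desc, file_size_in_bytes
FROM sys.dm_db_xtp_checkpoint_files
ORDER BY file_type_desc
go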

Create indexes which are defined on in-memory tables

As we have discussed earlier, only the index definition is persisted in the physical files. Indexes exist only in memory, and that's why no transaction information about them is logged in the physical files. Once the data is loaded and the DLLs are generated, the next step is to build the indexes on the data loaded into memory. We will talk about indexes in a later part of this series.

Once all three steps are completed, the database is made available. If there is a system problem and we are unable to load the data into memory, or we fail to generate a DLL, database recovery fails and the database goes into the "Recovery Pending" state. It's worth mentioning that if our database has in-memory tables as well as disk-based tables and database recovery fails, we would not be able to access the disk-based tables either.
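A quick way to check where the database landed after startup is sys.databases; a database whose in-memory recovery failed shows up as RECOVERY_PENDING:

-- ONLINE means all three recovery steps succeeded
SELECT name, state_desc
FROM sys.databases
WHERE name = 'SQLSeverHelp_IMO'
go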

In my next blog, I will take up scenarios of recovery failure and show you steps to troubleshoot them. Stay tuned!

By the way, this is going to be the last post of 2013, so wishing you a happy new year – see you next year with another post in the A-Z of In-Memory OLTP series.

Cheers,
Balmukund Lakhani
Twitter @blakhani
Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)

    Posted in A - Z Series, In Memory OLTP, In-Memory OLTP, SQL Server 2014 | 7 Comments »

    A-Z of In-Memory OLTP : SSMS Enhancements for In-Memory

    Posted by blakhani on December 10, 2013


    Microsoft believes that not everyone in the world is a developer; there are some DBAs as well. Be it Visual Studio, SQL Server, or any other programming paradigm, Microsoft makes things easy for developers and DBAs alike. In my previous post, you read about the new T-SQL constructs added to support In-Memory OLTP objects. This blog post shows the SSMS improvements and dialogs introduced with SQL Server 2014 to work with In-Memory OLTP.

    CREATE DATABASE

    We learned earlier that we need a new filegroup which contains memory_optimized_data. This is the key point for understanding the flow of actions below.

    • Right-click the “Databases” folder in SSMS and choose “New Database”. This is the same place where we create a regular database.


    • In the “New Database” dialog, go to the “Filegroups” tab, then click “Add Filegroup” under the “Memory Optimized Data” section.


    • I have given the name “SQLSeverHelp_IMO_FG” and then gone back to the “General” tab.


    • Here is a little explanation of the numbered steps:
      1. Come back to the “General” tab after creating the filegroup in the earlier step.
      2. Provide the “Database name”: SQLSeverHelp_IMO.
      3. Click the “Add” button and provide details in the new (third) row.
      4. We have given SQLSeverHelp_IMO_dir as the logical name.
      5. Since it is going to contain in-memory optimized data, we need to choose “FILESTREAM Data” as the “File Type”.
      6. Once we pick the drop-down value in step 5, “Filegroup” becomes “SQLSeverHelp_IMO_FG”, created on the previous screen.
      7. Provide the complete physical “Path” where the files/data for in-memory tables should be stored: C:\IMO_Database\SQLSeverHelp_IMO_dir

    Follow the sequence of numbered steps above and hit OK.

    Do you think I remember every command by heart? I am joking! If you want to script this action, use the “Script” button. It gives us the same script (with a few more settings) which we used in the previous part, shown below.

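    If you don’t want to click through the UI at all, a script along these lines achieves the same result – a minimal sketch, assuming the demo’s paths and omitting the size/growth settings the dialog adds:

    -- Create the database with a memory-optimized filegroup in one go
    CREATE DATABASE SQLSeverHelp_IMO
    ON PRIMARY
        (NAME = N'SQLSeverHelp_IMO_data', FILENAME = N'C:\IMO_Database\SQLSeverHelp_IMO_data.mdf'),
    FILEGROUP SQLSeverHelp_IMO_FG CONTAINS MEMORY_OPTIMIZED_DATA
        (NAME = N'SQLSeverHelp_IMO_dir', FILENAME = N'C:\IMO_Database\SQLSeverHelp_IMO_dir')
    LOG ON
        (NAME = N'SQLSeverHelp_IMO_log', FILENAME = N'C:\IMO_Database\SQLSeverHelp_IMO_log.ldf')
    go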

    The “Script” feature is available in almost every dialog of SSMS. Another option is to use a predefined template; this is one of the under-utilized/lesser-known features of SSMS. Go to the “View” menu and click “Template Explorer” (Ctrl+Alt+T) to access a lot of predefined scripts.


    • Once we click on “Template Explorer”, the template browser opens within Management Studio. There we can see many templates, and one of them is of interest to us right now.


    • Double-clicking opens the template, and pressing Ctrl+Shift+M brings up the window called “Specify Values for Template Parameters”.


    • We can make changes here and hit OK. We can edit the path and name and modify the T-SQL as per our requirements.

    If you have an existing database, then ALTER DATABASE is needed, because the filegroup for memory_optimized_data might not have been defined yet. We can go to the database properties, add the new filegroup, and follow the same screen with the seven steps, or use T-SQL as sketched below.
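    For the existing-database case, the equivalent T-SQL is roughly the following (a sketch reusing the demo’s filegroup name and path):

    -- Step 1: add the memory-optimized filegroup to the existing database
    ALTER DATABASE SQLSeverHelp_IMO
        ADD FILEGROUP SQLSeverHelp_IMO_FG CONTAINS MEMORY_OPTIMIZED_DATA
    go
    -- Step 2: add a container (directory) to that filegroup
    ALTER DATABASE SQLSeverHelp_IMO
        ADD FILE (NAME = N'SQLSeverHelp_IMO_dir', FILENAME = N'C:\IMO_Database\SQLSeverHelp_IMO_dir')
        TO FILEGROUP SQLSeverHelp_IMO_FG
    go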

    CREATE TABLE

    The next step is to create an In-Memory optimized table. Note that there is no “table designer” UI for In-Memory optimized tables. If you have followed the earlier steps, Template Explorer might still be open (if not, press Ctrl+Alt+T).

    Option # 1
    In situations where we have an existing database which does not have a memory_optimized_data filegroup defined, we can use the template called “Add MEMORY_OPTIMIZED_DATA Filegroup and Create Memory Optimized Table”.


    Option # 2
    If the database already has a memory-optimized filegroup, we can choose the template “Create Memory Optimized Table”.


    Option # 3
    If we go the conventional way of creating a table through SSMS, we can right-click the “Tables” folder under the database SQLSeverHelp_IMO and choose New > Memory Optimized Table.


    All three options open a query window with the template. (See, I told you earlier – no table designer.)


    In this query window, we press Ctrl+Shift+M and replace the parameters as needed. We can add more columns and indexes and make further modifications. As the commented section in the template says, we need a filegroup defined for memory_optimized_data; an expanded example follows.

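    Once the parameters are replaced, the result looks along these lines – a sketch in which the table and column names are my placeholders, not the template’s exact output:

    USE SQLSeverHelp_IMO
    go
    -- Memory-optimized tables require a primary key; a hash index needs
    -- a bucket count sized roughly to the expected number of unique keys
    CREATE TABLE dbo.MyFirstMemoryOptimizedTable
    (
        iID INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
        vFName VARCHAR(20) NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
    go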

    In a nutshell, in the absence of a UI, you need to know the T-SQL syntax.

    CREATE PROCEDURE

    Just like CREATE TABLE, there is no special interface available for natively compiled procedures. We can use either the right-click technique (#1) or the template browser (#2) to reach the template.

    Option # 1


    Option # 2


    Both of the options above open the template, and then we can write the code there. Remember the shortcut Ctrl+Shift+M for filling in the template parameters; a filled-in sketch follows.

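    After filling in the parameters, a minimal natively compiled procedure looks roughly like this – a sketch where the procedure name, parameter, and the table it reads are my assumptions; the WITH options and the ATOMIC block are what SQL Server 2014 requires for native compilation:

    CREATE PROCEDURE dbo.GetRowById
        @iID INT
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
        -- Reads a single row from the memory-optimized table created earlier
        SELECT iID, vFName
        FROM dbo.MyFirstMemoryOptimizedTable
        WHERE iID = @iID
    END
    go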

    So far we have learned how to create these objects. In the next post, we will see what happens differently (compared to regular objects) when in-memory tables and natively compiled stored procedures are created. Stay tuned!


    Stay Tuned!

    Cheers,
    Balmukund Lakhani
    Twitter @blakhani
    Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)

    Posted in A - Z Series, Hekaton Series, In Memory OLTP, In-Memory OLTP, Introduction, screen shot, Screenshot, SQL Server 2014, SQL Server Management Studio, SSMS | 4 Comments »

    A-Z of In-Memory OLTP : My name is Hekaton

    Posted by blakhani on December 3, 2013


    During my last session about the In-Memory OLTP (codename Hekaton) feature of SQL Server 2014 at the SQLBangalore User Group meeting, I discovered a lot of interest in the community about this topic. That gave me the idea to share my knowledge about the feature via a blog series. This is the first post of the series “A-Z of In-Memory OLTP”, and hopefully it will help you learn everything about the In-Memory OLTP feature. My friend and regular blogger Pinal (b|t) has written about it here. People have been using “Hekaton” and “In-Memory OLTP” interchangeably, and I might not be an exception in my future posts.

    Hekaton is the code name used by the SQL Server product team for this feature. Once the product is released to market, a feature gets its branding from the marketing team. “Hekaton” is a Greek term for a factor of 100 – the aspirational goal of the project was to reach 100x performance acceleration. Another thing you will observe in my future posts is that the product team has been using the term XTP – which stands for eXtreme Transaction Processing – in DMV names and other places. This feature has already been announced to ship in the next major release of SQL Server, which is SQL Server 2014. While writing this blog, I used the CTP2 version, which is free to download from here.

    There have been several myths about the new feature, as follows:

    1. SQL Server In-Memory OLTP is a recent response to competitors’ offerings.
    2. In-Memory OLTP is like DBCC PINTABLE.
    3. In-Memory databases are new, separate products.
    4. You can use In-Memory OLTP in an existing SQL Server app with NO changes whatsoever.
    5. Since tables are in memory, the data is not durable or highly available – I will lose it after a server crash.

    I can guarantee that ALL of the above are myths, which we will debunk through the course of this series. Here is a quick stab at each of them.

    1. The Microsoft Research (MSR) team has been working on this for the past couple of years to get the feature baked into the product. It was not an overnight decision to write the code for this feature. Research along these lines has been going on in the SQL Server arena for a while (here, here and here) – those links are from the SQL Server 2012 days.
    2. It is an altogether new feature with completely new index and table structures. DBCC PINTABLE only kept (pinned) a table’s 8 KB pages in memory and still took latches and locks for any modification. In-Memory OLTP is a latch- (and lock-) free engine.
    3. In-Memory OLTP is not a separate product; it is part of SQL Server 2014 itself (limited to the x64 Enterprise Edition only – “Enterprise Evaluation” and “Developer” included). We need to install the “Database Engine Services” component during installation to get this feature.
    4. Don’t trust anyone who says zero modification is required. At the very least, the table schema needs changes to move a disk-based table to an in-memory table. We will learn about the syntax and internals later in the series.
    5. In-memory tables do not reside only in memory with no durability. In-Memory OLTP provides full durability for memory-optimized tables: when a transaction that has changed a memory-optimized table commits, SQL Server – just as it does for disk-based tables – guarantees that the changes are permanent and survive a database restart, provided the underlying storage is available. We can also create non-durable tables, but by default in-memory tables are durable and the data is available even after a server crash. (How? You will learn more in further posts of this series; a small sketch follows below.)
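    To make myth #5 concrete, durability is declared per table at creation time; a minimal sketch (both table names are made up for illustration):

    -- Default: schema AND data survive a restart
    CREATE TABLE dbo.DurableExample
    (
        iID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
    go

    -- Opt-in variant: only the schema survives; data is lost on restart
    CREATE TABLE dbo.NonDurableExample
    (
        iID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
    go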

    Why In-Memory

    Looking at the market trend of tumbling RAM cost (USD/MB) and the performance difference between reading data from memory and from disk, it is evident that people would love to keep their data in memory. With this evolution in the hardware industry, software has to evolve so that it can take advantage of the hardware and scale as much as possible. On the other hand, businesses don’t want to compromise the durability of their data – a restart clears RAM, but the data should come back in the same state it was in before the failure (see myth #5). To meet both the hardware trend and the durability requirement, SQL Server 2014 has introduced In-Memory OLTP, which addresses them in a unique manner.

    Microsoft already has some in-memory offerings, namely the xVelocity analytics engine and the xVelocity columnstore index, targeting analytical workloads and columnar storage respectively. In the words of Paul Larson, Principal Researcher with Microsoft’s Database Group, mentioned here: “In traditional models, the assumption is that data lives on disk and is stored on disk pages. This creates a lot of overhead when you try to access records. When data lives totally in memory, we can use much, much simpler data structures. Hekaton’s index data structures and storage structures are optimized on the basis that when a table is declared memory-optimized, all of its records live in memory.”

    This is the right time to remind you that in-memory does not apply to the complete database, so we could call it a “hybrid” database. You have the option to pick the tables which are hot and under contention and move those in-memory. This means you can have a 1 TB database with one 100 GB hot-spot table in memory, while the remaining tables stay on conventional disk storage.

    More reading

    Read this detailed technical paper about multi-version concurrency control (MVCC), written by the product team and folks from the University of Wisconsin and submitted to the Very Large Data Base Endowment (www.VLDB.org). You can read here about the fantastic work done by the Microsoft Research team.

    If you haven’t yet, be sure to check out SQL Server 2014 CTP2 so that you can follow this series and get your hands dirty.

    Stay Tuned!

    Cheers,
    Balmukund Lakhani
    Twitter @blakhani
    Author: SQL Server 2012 AlwaysOn (Paperback, Kindle)

    Posted in A - Z Series, Hekaton Series, In Memory OLTP, In-Memory OLTP, SQL Server, SQL Server 2014 | 15 Comments »