A-Z of In-Memory OLTP : My name is Hekaton
Posted by blakhani on December 3, 2013
During my last session about the In-Memory OLTP (codename Hekaton) feature of SQL Server 2014 at the SQLBangalore User Group Meeting, I discovered a lot of interest in the community about it. This gave me the idea to share my knowledge about the feature via a blog series. This is the first post of the series “A-Z of In-Memory OLTP” and hopefully it will help you learn everything about the In-Memory OLTP feature. My friend and regular blogger, Pinal (b|t), has written about it here. People have been using “Hekaton” and “In-Memory OLTP” interchangeably, and I might not be an exception in my future posts.
Hekaton is the code name used by the SQL Server product team for this feature. Once the product is released to market, the feature gets a branding from the marketing team. Hekaton is the Greek word for “hundred”: the aspirational goal of the project was to achieve 100x performance acceleration. Another thing you will observe in my future posts is that the product team has been using the term XTP, which stands for eXtreme Transaction Processing, in the DMVs and other places. This feature has already been announced to ship in the next major release of SQL Server, which is SQL Server 2014. While writing this blog, I used the CTP2 version, which is free to download from here.
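As a quick illustration of the XTP naming, once CTP2 is installed you can list the XTP-related system objects yourself. This is a sketch; the exact list of objects returned varies between CTP builds:

```sql
-- List system objects whose names carry the XTP (eXtreme Transaction Processing) prefix.
-- Run against a SQL Server 2014 CTP2 (or later) instance.
SELECT name, type_desc
FROM sys.system_objects
WHERE name LIKE '%xtp%'
ORDER BY name;
```

You will see DMVs such as sys.dm_db_xtp_table_memory_stats show up here; we will dig into several of them later in the series.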
There are a few myths about the new feature, which are as follows:
- SQL Server In-Memory OLTP is a recent response to competitor’s offerings.
- In-Memory OLTP is like DBCC PINTABLE.
- In-Memory Databases are new separate products.
- You can use In-Memory OLTP in an existing SQL Server app with NO changes whatsoever.
- Since tables are in memory, the data is not Durable or Highly Available – I will lose it after server crash.
I can guarantee that ALL of the above are myths, which we will uncover in later parts of this series. Here is a quick stab at the above myths.
- The Microsoft Research (MSR) team has been working for the past couple of years to get this feature baked into the product. This was not an overnight decision to write code and ship the feature. Research along these lines has been going on in the SQL Server arena (here, here and here). These links are from the SQL Server 2012 days.
- It’s an altogether new feature with completely new index and table structures. PINTABLE only kept (pinned) 8 KB table pages in memory and still took latches and locks for any modification. In-Memory OLTP is a latch-free (and lock-free) engine.
- In-Memory OLTP is not a separate product. It is part of SQL Server 2014 itself (limited to x64 Enterprise Edition only, with “Enterprise Evaluation” and “Developer” included). We need to install the “Database Engine Services” component during installation to get this feature.
- Don’t trust anyone who says zero modification is required. At a minimum, the table schema needs modification to move a disk-based table to an in-memory table. We will learn about the syntax and internals later in the series.
- In-memory tables do not reside only in memory in the sense of being non-durable. In-Memory OLTP provides full durability for memory-optimized tables. When a transaction that has made changes to a memory-optimized table commits, SQL Server guarantees, just as it does for disk-based tables, that the changes are permanent and can survive a database restart, provided the underlying storage is available. We can have non-durable tables as well, but by default in-memory tables are durable and the data is available even after a server crash. (How? You will learn more in further posts in this series.)
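To make myths 4 and 5 concrete, here is a sketch of the CTP2 syntax (the table and column names are made up for illustration; full syntax and internals come later in the series). A disk-based table cannot simply be flipped in-memory, it must be re-created as memory-optimized, and durability is a per-table choice:

```sql
-- Hypothetical table, re-created as memory-optimized (a schema change is required).
-- DURABILITY = SCHEMA_AND_DATA (the default) makes the data survive a restart;
-- SCHEMA_ONLY would keep only the table definition across restarts.
CREATE TABLE dbo.HotOrders
(
    OrderId    INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    CustomerId INT       NOT NULL,
    OrderDate  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Notice the new pieces that do not exist for disk-based tables: the MEMORY_OPTIMIZED option, the per-table DURABILITY option, and the hash index with its bucket count.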
Why In-Memory
Looking at the market trend of tumbling RAM prices (USD/MB) and the performance implications of reading data from memory versus disk, it is evident that people would love to keep their data in memory. With this evolution in the hardware industry, software has to evolve so that it can take advantage of it and scale as much as possible. On the other hand, businesses don’t want to compromise the durability of their data: a restart would clear RAM, but the data should come back in the same state it was in before the failure (see myth #5). To meet both hardware trends and durability requirements, SQL Server 2014 introduces In-Memory OLTP, which solves both in a unique manner.
Microsoft already has some in-memory offerings, namely the xVelocity analytics engine and the xVelocity columnstore index, which target analytical workloads and columnar storage, respectively. In the words of Paul Larson, Principal Researcher with Microsoft’s Database Group, mentioned here: “In traditional models, the assumption is that data lives on disk and is stored on disk pages. This creates a lot of overhead when you try to access records. When data lives totally in memory, we can use much, much simpler data structures. Hekaton’s index data structures and storage structures are optimized on the basis that when a table is declared memory-optimized, all of its records live in memory.”
This is the right time to remind you that in-memory does not apply to a complete database, so we can call it a “hybrid” database. You have the option to choose the tables that are hot and suffering contention and move them in-memory. This means you can have a 1 TB database with one 100 GB hot-spot table in memory; the rest of the tables can stay on conventional disk-based storage.
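As a sketch of how such a hybrid database is prepared in CTP2 (the database, filegroup, and path names below are hypothetical), an existing database just needs a filegroup that can hold memory-optimized data before any table can be moved in-memory:

```sql
-- Add a memory-optimized filegroup and a container to an existing database.
-- Names and the file path are for illustration only; the FILENAME here is a folder,
-- not a single file, because the container holds checkpoint file pairs.
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_mod1', FILENAME = 'C:\Data\SalesDB_mod1')
    TO FILEGROUP SalesDB_mod;
```

After this, the same database serves both memory-optimized and disk-based tables side by side.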
More reading
Read this detailed technical paper about multi-version concurrency control (MVCC), written by the product team and folks from the University of Wisconsin and submitted to the Very Large Data Base Endowment Inc. (www.VLDB.org). You can read here about the fantastic work done by the Microsoft Research team.
If you haven’t yet, be sure to check out the SQL Server 2014 CTP2 so that you can follow this series and get your hands dirty.
Stay Tuned!
Cheers,
Balmukund Lakhani
Twitter @blakhani
Author: SQL Server 2012 AlwaysOn – Paperback, Kindle
Sandip Pani said
Thank you Sir. Very well explained. Waiting for the next post.
blakhani said
Thanks @Sandip
Abhay said
Quite itching to know in depth how durability will be maintained. We do have a distributed cache (AppFabric 1.1) where we try to keep read-only data in the cache, but we cannot keep read-write data for fear of losing the cache. It would need a lot of R&D and investment before we could do that. So it will be interesting to see how in-memory tables maintain durability..
blakhani said
@Abhay – Sure! Stay tuned and I am going to have a dedicated post on that.
Sourav said
Fantastic.
blakhani said
thanks.
Thirumurugan said
I attended the SQL Server Bangalore event last month. Awesome, Sir!!! You took a session on In-Memory OLTP in SQL Server 2014 (Hekaton).
It was very useful for understanding memory-optimized tables in depth. I am waiting for your next post.
blakhani said
thanks Thiru
A-Z of In-Memory OLTP : TSQL Constructs for In-Memory « Help: SQL Server said
[…] « A-Z of In-Memory OLTP : My name is Hekaton […]
A-Z of In-Memory OLTP : SSMS Enhancements for In-Memory « Help: SQL Server said
[…] Part 1 A-Z of In-Memory OLTP : My name is Hekaton […]
A-Z of In-Memory OLTP : Behind the scenes « Help: SQL Server said
[…] SQL ServerMSSQL12.MSSQLSERVERMSSQLDATA). Do you remember what XTP stands for? Go back and read first […]
A-Z of In-Memory OLTP : Extended Events (XEvents) « Help: SQL Server said
[…] To get the list of XEvents which are related to In-Memory OLTP, we can either use query or Interface. There are three packages available for XTP (Hope you have read my initial blog of series which explains XTP, if not, read here) […]
Sourav said
Fantastic post. Many thanks. 🙂
manish said
Where are you these days, boss!