Microsoft then took all this another step further. Instead of the pessimistic approach of using latches to prevent more than one process from writing to a memory page at the same time, Microsoft extended the optimistic approach to memory management as well. With versioning, in-memory tables work off a model that is "eventually" consistent, with a conflict resolution process that will roll back a transaction but never block one transaction behind another. This has the potential to lead to some data loss, but it makes everything within the data access layer fast.
Finally, as you've seen throughout the rest of the book, a major part of query tuning is figuring out how to work with the query optimizer to get a good execution plan and then have that plan reused multiple times. This can also be an intensive and slow process. SQL Server 2014 introduces the concept of natively compiled stored procedures. These are literally T-SQL code compiled down to DLLs and made part of the SQL Server OS. This compilation process is costly, so it shouldn't be applied to just any old query. The principal idea is to spend the time and effort compiling a procedure to native code and then get to use that procedure millions of times at a radically improved speed.
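As a rough illustration of the syntax (not one of this chapter's worked examples), a natively compiled stored procedure in SQL Server 2014 might look like the following sketch. It assumes the same hypothetical memory-optimized table, dbo.SessionData, sketched near the end of this section. Note the required options: NATIVE_COMPILATION, SCHEMABINDING, an EXECUTE AS clause, and a BEGIN ATOMIC block that declares an isolation level and language.
CREATE PROCEDURE dbo.InsertSessionData
    @SessionID INT,
    @Payload NVARCHAR(100)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Everything in this block is compiled to native code when the procedure is created.
    INSERT INTO dbo.SessionData (SessionID, Payload)
    VALUES (@SessionID, @Payload);
END;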
All this technology comes together to create new functionality that you can use by itself or in combination with existing table structures and standard T-SQL. In fact, you can treat in-memory tables in much the same way as you treat normal SQL Server tables and still realize some performance improvements. But you can't just do this anywhere. There are some fairly specific requirements for taking advantage of in-memory OLTP tables and procedures.
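For example, once a memory-optimized table exists, ordinary interpreted T-SQL (interop) can read it and even join it to a traditional disk-based table; nothing about the statement itself changes. In this sketch, dbo.SessionData is the hypothetical memory-optimized table from the other examples in this section and dbo.SessionLog is a hypothetical disk-based table keyed on SessionID.
-- Standard T-SQL joining a memory-optimized table to a disk-based table.
SELECT sd.SessionID,
       sd.Payload,
       sl.LoggedOn
FROM dbo.SessionData AS sd      -- memory-optimized
JOIN dbo.SessionLog AS sl       -- traditional, disk-based
    ON sl.SessionID = sd.SessionID
WHERE sd.SessionID = 42;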
System Requirements
The most important system requirement for the in-memory technology is that you must be running the Enterprise edition of SQL Server 2014 in order to get access to it (although it also works in the Developer edition). You must meet a few other standard requirements before you can even consider whether memory-optimized tables are a possibility:
•	A modern 64-bit processor
•	Twice the amount of free disk storage for the data you intend to put into memory
•	Lots of memory
Obviously, for most systems, the key is lots of memory. You need enough memory for the operating system and SQL Server to function normally. Then you still need memory for all the non-memory-optimized requirements of your system, including the data cache. Finally, on top of all that, you're going to add memory for your memory-optimized tables. If you're not looking at a fairly large system, with a minimum of 64GB of memory, I don't suggest even considering this as an option. Smaller systems are just not going to provide enough storage in memory to make this worth the time, effort, and added licensing costs.
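Before going further, it can help to confirm the edition and the physical memory on the instance. One hedged way to do that is with SERVERPROPERTY and the sys.dm_os_sys_memory DMV; the thresholds you compare the results against are, of course, your own.
-- Check the edition and version of the instance.
SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;

-- Check the physical memory visible to the server, in megabytes.
SELECT total_physical_memory_kb / 1024 AS TotalPhysicalMemoryMB,
       available_physical_memory_kb / 1024 AS AvailablePhysicalMemoryMB
FROM sys.dm_os_sys_memory;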
Basic Setup
Beyond the hardware requirements, you have to do some additional work on your database to enable in-memory tables. I'll start with a new database to illustrate.
CREATE DATABASE InMemoryTest ON PRIMARY
(NAME = N'InMemoryTest_Data',
FILENAME = N'D:\Data\InMemoryTest_Data.mdf',
SIZE = 5GB)
LOG ON
(NAME = N'InMemoryTest_Log',
FILENAME = N'L:\Log\InMemoryTest_Log.ldf');
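The database also needs a filegroup marked CONTAINS MEMORY_OPTIMIZED_DATA, with at least one container, before any memory-optimized table can be created in it. A minimal sketch, reusing the D:\Data path from the example above (any suitable location works), might look like this:
-- Add a filegroup that will hold the in-memory OLTP checkpoint files.
ALTER DATABASE InMemoryTest
ADD FILEGROUP InMemoryTest_InMemoryData
    CONTAINS MEMORY_OPTIMIZED_DATA;

-- Add a container (a directory, not a single file) to that filegroup.
ALTER DATABASE InMemoryTest
ADD FILE (NAME = N'InMemoryTest_InMemoryData',
          FILENAME = N'D:\Data\InMemoryTest_InMemoryData')
TO FILEGROUP InMemoryTest_InMemoryData;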
For in-memory tables to maintain durability, they must write to disk as well as to memory, since memory is cleared when the server loses power or restarts. Durability (part of the ACID properties of a relational database) means that once a transaction commits, it stays committed. You can have a durable in-memory table or a nondurable one. With a nondurable table, only the table's schema is persisted; the data itself is lost if the server restarts.
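As a hedged sketch of what those two choices look like in the CREATE TABLE syntax (the table names are placeholders used only for illustration), DURABILITY = SCHEMA_AND_DATA persists both the definition and the rows, while SCHEMA_ONLY persists just the definition:
-- A durable memory-optimized table: schema and data survive a restart.
CREATE TABLE dbo.SessionData
(
    SessionID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Payload NVARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- A nondurable table: the definition survives a restart, the rows do not.
CREATE TABLE dbo.SessionScratch
(
    SessionID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Payload NVARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);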
 