Chapter 23
Memory-Optimized OLTP Tables and Procedures
One of the principal needs of online transaction processing (OLTP) systems is to get as much speed as possible out of the system. With this in mind, Microsoft introduced in SQL Server 2014 a new set of functionality focused on making OLTP systems as fast as possible: the memory-optimized technologies of in-memory tables and natively compiled stored procedures. This set of enterprise-only features is meant for high-end, transaction-intensive, OLTP-focused systems. The memory-optimized technologies are another tool in the query-tuning toolbox, but they are a highly specialized tool, applicable only to certain applications, so be cautious in adopting this new technology. That said, on the right system with enough memory, I am talking about blazing-fast speed.
In this chapter, I cover the following topics:
•	The basics of how in-memory tables work
•	Improving performance by natively compiling stored procedures
•	The benefits and drawbacks of natively compiled procedures and in-memory OLTP tables
•	Recommendations for when to use in-memory OLTP tables
In-Memory OLTP Fundamentals
At the core of it all, you can tune your queries to run incredibly fast. But no matter how fast you make them run, to a degree you're limited by the architecture of modern computers. Typically, the number-one bottleneck is the storage system. Whether you're still looking at spinning platters or you've moved on to some type of SSD or similar technology, the disks are still the slowest aspect of the system, which means that for both reads and writes you have to wait. Memory, on the other hand, is fast, and on modern 64-bit operating systems it can be plentiful. So, if you have tables that you can move completely into memory, you can radically improve their speed. That's part of what in-memory OLTP tables are all about: moving data access, both reads and writes, off the disk and into memory.
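To make the idea concrete, here is a minimal sketch of creating a memory-optimized table. The database name, filegroup name, file path, and the dbo.OrderDetail table are all hypothetical; what the syntax requires is that the database already contain a filegroup marked CONTAINS MEMORY_OPTIMIZED_DATA before any such table can be created.

-- Hypothetical setup: the database needs a memory-optimized filegroup first.
ALTER DATABASE MyDatabase ADD FILEGROUP InMemoryFG
    CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MyDatabase ADD FILE
    (NAME = 'InMemoryData', FILENAME = 'C:\Data\InMemoryData')
    TO FILEGROUP InMemoryFG;

-- A memory-optimized table. DURABILITY = SCHEMA_AND_DATA keeps the data
-- recoverable after a restart, and a hash index sized by BUCKET_COUNT
-- takes the place of the usual clustered index.
CREATE TABLE dbo.OrderDetail
(
    OrderDetailID INT IDENTITY(1, 1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    OrderID INT NOT NULL,
    Quantity INT NOT NULL,
    UnitPrice MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);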
But Microsoft did more than that. It recognized that while the disk was slow, other aspects of the system slowed things down as well: how queries were compiled, stored, and accessed, and how the data was accessed and managed through the transaction system. So, Microsoft made a series of changes there too. The primary one was moving from a pessimistic to an optimistic approach to transactions. The existing engine forces every transaction to be written to the transaction log before the data changes can be flushed to disk, which creates a bottleneck in transaction processing. So, instead of pessimism about whether a transaction will complete successfully, Microsoft took the optimistic approach that most of the time transactions will complete. Further, instead of a blocking situation where one transaction has to finish updating data before the next can access or update it, Microsoft versioned the data: each modification creates a new row version rather than locking the row in place. This eliminates a major point of contention within the system and radically reduces blocking, and because it all happens in memory, it's even faster.
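As a small illustration of the optimistic model (reusing the hypothetical dbo.OrderDetail table from the earlier sketch), inside an explicit transaction a memory-optimized table is accessed under a snapshot-style isolation level via a table hint; the reader takes no locks, and a concurrent writer in another session produces a new row version instead of blocking the reader.

-- Session 1: read under snapshot isolation; no shared locks are taken.
BEGIN TRANSACTION;

SELECT OrderID, Quantity, UnitPrice
FROM dbo.OrderDetail WITH (SNAPSHOT)
WHERE OrderID = 42;

COMMIT TRANSACTION;

-- Session 2 (running concurrently): the update writes a new row version
-- rather than waiting on session 1, so neither session blocks the other.
UPDATE dbo.OrderDetail WITH (SNAPSHOT)
SET Quantity = Quantity + 1
WHERE OrderID = 42;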
 