adding more memory to the server can be the easiest and cheapest option in the long term. It is often easier and
cheaper to upgrade hardware than to invest dozens or even hundreds of hours redesigning and refactoring the code and
database schema.
Tip
Estimating the amount of memory required for memory-optimized tables is not a trivial task. You should
estimate the memory requirements for several different components:
Row data size consists of a 24-byte row header, an array of index pointers (8 bytes per index),
and the payload (the actual row data). For example, if your table has 1,000,000 rows and
3 indexes, and each row is about 200 bytes on average, you will need (24 + 3 * 8 + 200)
* 1,000,000 bytes = ~236.5MB of memory to store the row data; this number does not include
any row-versioning overhead.
Hash indexes use 8 bytes per bucket. If a table has two hash indexes defined with 1,500,000
buckets each, SQL Server will create indexes with 2,097,152 buckets each, rounding the number
of buckets specified in the index definition up to the next power of two. Those two indexes will
use 2,097,152 * 2 * 8 = 32MB of memory.
Range index memory usage is based on the number of unique index keys and the index key
size. If a table has a range index with 250,000 unique key values, and each key value uses
30 bytes on average, it would use (30 + 8 (pointer)) * 250,000 = ~9MB of memory. You can
ignore the page headers and non-leaf pages in your estimation, as their size is insignificant
compared to the leaf-level rows.
Row versioning memory estimation depends on the duration of the longest transactions
and the average number of data modifications (inserts and updates) per second.
For example, if some processes in a system run 10-second transactions and, on average,
the system handles 1,000 data modifications per second, you can estimate:
10 * 1,000 * 248 (row size) = ~2.4MB of memory for row-versioning storage.
Obviously, these numbers outline the minimum amount of memory required. You should factor in future growth
and changes in the workload, and reserve some additional memory just to be safe.
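If the tables already exist, you can cross-check such estimates against the memory that SQL Server actually reports for them. The following sketch queries the sys.dm_db_xtp_table_memory_stats view, available starting with SQL Server 2014; it illustrates the idea only and is not a complete monitoring solution.

-- Sketch: memory consumed by memory-optimized tables and their indexes
-- Run in the database that contains the memory-optimized tables
SELECT
    OBJECT_NAME(tms.object_id) AS table_name,
    tms.memory_used_by_table_kb,         -- row data, including row versions
    tms.memory_allocated_for_table_kb,
    tms.memory_used_by_indexes_kb,       -- hash and range indexes
    tms.memory_allocated_for_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats tms
ORDER BY tms.memory_allocated_for_table_kb DESC;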
It is almost impossible to estimate the exact disk storage space required for in-memory OLTP data. It depends
on the workload, the rate of change of the data, and the frequency of the CHECKPOINT and merge processes. As a general
rule, you should reserve at least 2-3 times more space on disk than the space used by the data rows in memory. Remember
that indexes do not take up any disk space; they are recreated when the data is loaded into memory.
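To see how much disk space the checkpoint files currently occupy, you can query sys.dm_db_xtp_checkpoint_files. The sketch below assumes the SQL Server 2014 column names; later versions changed some of the columns in this view.

-- Sketch: approximate on-disk footprint of the in-memory OLTP checkpoint files
SELECT
    cf.file_type_desc,                               -- DATA or DELTA files
    cf.state_desc,
    COUNT(*) AS file_count,
    SUM(cf.file_size_in_bytes) / 1024 / 1024 AS total_size_mb
FROM sys.dm_db_xtp_checkpoint_files cf
GROUP BY cf.file_type_desc, cf.state_desc
ORDER BY cf.file_type_desc, cf.state_desc;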
Summary
Project Hekaton, released as part of SQL Server 2014 Enterprise Edition, is the new latch- and lock-free in-memory
OLTP engine that provides exceptional throughput for OLTP workloads. It is fully integrated into SQL Server 2014,
and it lets you store a subset of critical database tables in memory while keeping other tables on disk. You can access
in-memory data through the T-SQL interop engine or through natively compiled stored procedures, which we will
discuss in the next chapter.
There are plenty of limitations in the first release of the in-memory OLTP engine. To name just a few,
memory-optimized tables support only a subset of SQL Server data types, rows cannot exceed 8,060 bytes, and
off-row storage is not supported. Indexed text columns must use BIN2 collations.
The in-memory OLTP engine supports two types of indexes. Hash indexes are useful for equality searches, while range
indexes support range searches. A table can have at most eight indexes, including the required unique primary key.
In-memory OLTP does not persist indexes on disk; they are recreated when the data is loaded into memory.
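For illustration, a minimal sketch of a durable memory-optimized table that combines both index types could look like the following; the table, column, and index names are made up for this example.

-- Sketch: memory-optimized table with a hash primary key and a range (nonclustered) index
CREATE TABLE dbo.OrderItems
(
    OrderId INT NOT NULL,
    ItemId INT NOT NULL,
    ItemCode VARCHAR(32)
        COLLATE Latin1_General_100_BIN2 NOT NULL,  -- indexed text columns require a BIN2 collation
    Quantity INT NOT NULL,

    -- Hash index: the bucket count is rounded up to the next power of two (here, 2,097,152)
    CONSTRAINT PK_OrderItems PRIMARY KEY NONCLUSTERED HASH (OrderId, ItemId)
        WITH (BUCKET_COUNT = 1500000),

    -- Range (nonclustered) index for range searches on ItemCode
    INDEX IX_OrderItems_ItemCode NONCLUSTERED (ItemCode)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);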