You won't be using all the same measures as before; instead, you'll rely on execution time. Even if you capture the metrics through Extended Events, you won't see the same types of values as shown in Figure 23-3; however, the reads in this case are a measure of the activity of the system, so you can anticipate that higher values mean more access to the data and lower values mean less.
Figure 23-3. The Extended Events output of the SELECT query using in-memory tables shows duration, not reads
With the tables in place and proof of improved performance for both inserts and selects, let's talk about the indexes that you can use with in-memory tables and how they differ from standard indexes.
In-Memory Indexes
An in-memory table can have up to eight indexes created on it at one time, and every memory-optimized table must have at least one index; the index created to support the primary key counts toward that total. A durable table must have a primary key. There are two basic index types you can create: the nonclustered hash index that you used previously and the nonclustered index. These are not the same kind of indexes that are created on standard tables. First, they're maintained in memory in the same way the in-memory tables are. Second, the same durability rules apply to the indexes as to the in-memory tables themselves.
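As a sketch of the syntax (the table and column names here are hypothetical, not from the book's examples), a memory-optimized table can declare both index types inline:

```sql
-- Hypothetical memory-optimized table declaring both index types.
CREATE TABLE dbo.Example
(
    ExampleID INT NOT NULL
        -- Nonclustered hash index, created to support the primary key
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 50000),
    ExampleName NVARCHAR(50) NOT NULL
        -- Nonclustered (range) index, the second available type
        INDEX ix_ExampleName NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Note that DURABILITY = SCHEMA_AND_DATA makes this a durable table, which is why the primary key is required here.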
Hash Index
A hash index is not simply a balanced-tree index stored in memory. Instead, the hash index uses a predefined set of hash buckets, or hash table, and hash values of the key to provide a mechanism for retrieving a table's data. SQL Server has a hash function that always produces the same hash value for the inputs provided, meaning that for a given key value you'll always get the same hash value. Multiple values can hash to the same bucket. Having a hash value makes a point lookup, the retrieval of a single row, an extremely efficient operation, as long as you don't run into lots of hash collisions, which occur when multiple values are stored at the same location.
This means the key to getting the hash index right is getting the right distribution of values across buckets. You do this by defining the bucket count for the index. For the first table I created, dbo.Address, I set a bucket count of 50,000. There are currently 19,000 rows in the table, so with a bucket count of 50,000 I ensure that I have plenty of storage for the existing set of values while providing healthy overhead for growth. You need to set the bucket count so that it's big enough without being too big. If the bucket count is too small, lots of rows end up stored within each bucket, which seriously impairs the system's ability to retrieve the data efficiently. In short, if you must err, err on the side of a bucket count that is too big. If you look at Figure 23-4, you can see this laid out in a different way.
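To check how well a bucket count fits the actual data, you can query the hash index statistics DMV. The DMV and its columns are real; the interpretation is a rule of thumb rather than a hard threshold:

```sql
-- Inspect bucket usage for hash indexes in the current database.
-- Many empty buckets means the count is generous (generally fine);
-- a high avg_chain_length suggests the bucket count is too small
-- and rows are chaining up within buckets.
SELECT  OBJECT_NAME(his.object_id) AS table_name,
        i.name AS index_name,
        his.total_bucket_count,
        his.empty_bucket_count,
        his.avg_chain_length,
        his.max_chain_length
FROM    sys.dm_db_xtp_hash_index_stats AS his
JOIN    sys.indexes AS i
        ON  i.object_id = his.object_id
        AND i.index_id = his.index_id;
```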