Consider a situation where an application performs a search on the Name column, which uses a case-insensitive
collation. Because indexed character columns in memory-optimized tables must use a binary (BIN2) collation, you will
need to convert all values to upper- or lower-case to be able to utilize a range index after the table becomes
memory-optimized.
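As a hypothetical illustration (the table and column names are assumptions, not taken from the application above), a memory-optimized version of such a table could store an upper-cased copy of the value in a BIN2-collated column and build the range index on that copy:

create table dbo.Customers
(
    CustomerId int not null
        primary key nonclustered hash with (bucket_count = 1000000),
    Name nvarchar(128) not null,
    -- upper-cased copy of Name; indexed character columns must use a BIN2 collation
    NameUpperCase nvarchar(128) collate Latin1_General_100_BIN2 not null,

    index IDX_Customers_NameUpperCase nonclustered (NameUpperCase)
)
with (memory_optimized = on, durability = schema_and_data);

-- search with an upper-cased argument, for example:
-- select CustomerId, Name from dbo.Customers where NameUpperCase = upper(@Name);

The application has to populate NameUpperCase itself on insert and update, because computed columns are not supported in memory-optimized tables in SQL Server 2014.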
It is also worth noting that using binary collations for data will lead to changes in the T-SQL code. You will need
to specify collations for variables in stored procedures and other T-SQL routines unless you change the database
collation to a binary one. Even then, if the database and server collations do not match, you will need to specify a
collation for the columns in temporary tables created in tempdb.
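For example (again, the object names are purely illustrative), you can make the comparison collation explicit in predicates and declare the collation on temporary table columns:

declare @Name nvarchar(128) = N'SMITH';

-- make the comparison collation explicit rather than relying on
-- collation precedence between the BIN2 column and the variable
select CustomerId, Name
from dbo.Customers
where NameUpperCase = @Name collate Latin1_General_100_BIN2;

-- temporary tables are created in tempdb and use its collation by default;
-- declare the column collation explicitly so joins against BIN2 columns do not fail
create table #CustomerNames
(
    NameUpperCase nvarchar(128) collate Latin1_General_100_BIN2 not null
);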
There are plenty of other factors to consider. However, the key point is that you should perform a thorough
analysis before starting a migration to in-memory OLTP. Such a migration can have a very significant cost impact, and
it should not be done unless it benefits the system.
SQL Server 2014 Management Studio provides a tool that can help you analyze whether in-memory OLTP will
improve your application's performance. This tool is based on the Management Data Warehouse, and it provides
you with a set of data collectors and reports that help identify the objects that would benefit the most from
conversion. While this tool can be beneficial during the initial analysis stage, you should not make a decision based
solely on its output. Take into account all of the other factors and considerations we have already discussed in
this chapter.
Note  You can read about the in-memory OLTP AMR tool at: http://msdn.microsoft.com/en-us/library/dn205133.aspx.
Moreover, SQL Server 2014 Management Studio provides you with Memory Optimization and Native Compilation advisors
that can help you analyze and convert specific tables and stored procedures to in-memory OLTP objects. You can read
more about them at: http://msdn.microsoft.com/en-us/library/dn284308.aspx and
http://msdn.microsoft.com/en-us/library/dn358355.aspx, respectively.
New development, on the other hand, is a very different story. You can design a new system and database schema
taking in-memory OLTP limitations into account. It is also possible to adjust some functional requirements during the
design phase. As an example, it is much easier to store data in a case-sensitive way from the beginning as compared to
changing the behavior of existing systems after they are deployed to production.
You should remember, however, that in-memory OLTP is an Enterprise Edition feature, and it requires powerful
hardware with a large amount of memory. It is an expensive feature due to its licensing costs. Moreover, it is
impossible to “set it and forget it.” Database professionals should actively participate in monitoring and tuning the
system after deployment. They need to monitor system memory usage, analyze data and recreate hash indexes if
bucket counts need to be changed, update statistics, redeploy natively-compiled stored procedures, and perform
other tasks as well.
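Some of these tasks can be scripted. As a sketch (the table name is an assumption), you can check hash index bucket utilization through the sys.dm_db_xtp_hash_index_stats view and refresh statistics manually:

-- long average chains and few empty buckets usually mean bucket_count is too low
select
    object_name(s.object_id) as table_name,
    i.name as index_name,
    s.total_bucket_count,
    s.empty_bucket_count,
    s.avg_chain_length,
    s.max_chain_length
from sys.dm_db_xtp_hash_index_stats s
    join sys.indexes i on
        i.object_id = s.object_id and i.index_id = s.index_id;

-- statistics on memory-optimized tables are not updated automatically;
-- refresh them manually and then redeploy natively-compiled stored procedures
update statistics dbo.Customers with fullscan, norecompute;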
All of that makes in-memory OLTP a bad choice for Independent Software Vendors who develop products
that need to be deployed to a large number of customers. Moreover, it is not practical to support two versions of a
system, with and without in-memory OLTP, due to the increased development and support costs.
Finally, if you are using the Enterprise Edition of SQL Server 2014, you can benefit from some of the in-memory
OLTP features even if you have decided that a full in-memory OLTP migration is not cost effective for your
organization's needs. You can use memory-optimized table variables and/or non-durable memory-optimized tables as a
staging area and as a replacement for on-disk temporary tables. This will improve the performance of calculations and
ETL processes that need to store a temporary copy of the data.
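As an illustration (the type and table names are assumptions), a memory-optimized table variable requires a memory-optimized table type, and a staging table can be created with DURABILITY = SCHEMA_ONLY so that its data is never persisted to disk:

-- memory-optimized table type: variables of this type live in memory and do not use tempdb
create type dbo.tvpOrderIds as table
(
    OrderId int not null
        primary key nonclustered hash with (bucket_count = 65536)
)
with (memory_optimized = on);
go

declare @OrderIds dbo.tvpOrderIds;

-- non-durable staging table: the schema survives a restart, the data does not
create table dbo.ETLStaging
(
    OrderId int not null
        primary key nonclustered hash with (bucket_count = 1000000),
    OrderDate datetime2(0) not null,
    Amount money not null
)
with (memory_optimized = on, durability = schema_only);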
Another possibility is using memory-optimized tables as session state storage for ASP.NET applications or as a
distributed cache for client applications, avoiding the purchase of expensive third-party solutions. You can use either
durable or non-durable tables in this scenario. Durable tables will provide you with transparent failover, while
non-durable tables will give you incredibly fast performance. Obviously, you should remember the 8,060-byte maximum
row size limitation and address it in code if this becomes an issue in your system.
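A minimal sketch of such a session state table (the names and chunk size are assumptions), splitting the serialized session payload into chunks because LOB data types are not supported and a row cannot exceed 8,060 bytes in SQL Server 2014:

create table dbo.SessionState
(
    SessionId varchar(64) collate Latin1_General_100_BIN2 not null,
    ChunkNum int not null,
    Data varbinary(7800) not null,        -- one chunk of the serialized session state
    ExpirationTime datetime2(0) not null,

    constraint PK_SessionState
        primary key nonclustered hash (SessionId, ChunkNum)
        with (bucket_count = 1000000)
)
with (memory_optimized = on, durability = schema_only);

Replacing SCHEMA_ONLY with SCHEMA_AND_DATA turns this into the durable variant described above, trading some performance for the ability to survive a restart or failover.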