declare
    @tblCursor table(ID int not null primary key, Value int not null)
declare
    @ID int = -1
    ,@Value int
    ,@RC int = 1

/* Copy the data from the memory-optimized table into the table variable */
insert into @tblCursor(ID, Value)
    select ID, Value from dbo.MOData

/* Emulate a cursor: fetch the qualifying rows one by one in ID order */
while @RC = 1
begin
    select top 1 @ID = ID, @Value = Value
    from @tblCursor
    where ID > @ID
    order by ID

    select @RC = @@rowcount

    if @RC = 1
    begin
        /* Row processing */
        update dbo.MOData set Value = Value * 2 where ID = @ID
    end
end
In-Memory OLTP: Implementation Considerations
As with any new technology, adoption of in-memory OLTP comes at a cost. You will need to acquire and/or upgrade
to SQL Server 2014, spend time learning the technology and, if you are updating an existing system, refactor code and
test the changes. It is important to perform a cost/benefit analysis and determine whether in-memory OLTP provides
benefits adequate to outweigh the costs.
In-memory OLTP is hardly a magical solution that lets you improve server performance simply by flipping
a switch and moving data into memory. It is designed to address a specific set of problems, such as latch and
lock contention on very active OLTP systems. It is less beneficial for Data Warehouse systems with low
concurrent activity, large amounts of data, and queries that require complex aggregations. While in some cases it is
still possible to achieve performance improvements by moving data into memory, you can often obtain better results
by implementing columnstore indexes, indexed views, data compression, and other database schema changes.
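As an illustration, in a Data Warehouse scenario a clustered columnstore index alone may yield a larger gain than
a migration to memory-optimized tables. The sketch below uses SQL Server 2014 syntax against a hypothetical
dbo.FactSales fact table:

-- Hypothetical fact table; the clustered columnstore index replaces the
-- row-based storage, compressing the data and speeding up aggregations
create clustered columnstore index CCI_FactSales on dbo.FactSales;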
It is also worth remembering that most performance improvements are achieved by using natively compiled
stored procedures, which can rarely be used in Data Warehouse workloads due to the limited set of T-SQL features
that they support.
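For reference, a natively compiled procedure must be schema-bound and wrap its body in an atomic block, as in the
minimal sketch below; it reuses the dbo.MOData table from the earlier example, and the procedure name is hypothetical:

create procedure dbo.DoubleMODataValues
with native_compilation, schemabinding, execute as owner
as
begin atomic with
(
    transaction isolation level = snapshot,
    language = N'English'
)
    -- Set-based logic compiles to machine code; only a limited subset
    -- of T-SQL is supported inside natively compiled modules
    update dbo.MOData set Value = Value * 2
end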
Another important factor is whether you plan to use in-memory OLTP during the development of a new system or
the migration of an existing one. Obviously, you will need to make changes in existing systems to address the
limitations of memory-optimized tables, such as the lack of support for triggers, foreign key constraints, check and
unique constraints, and calculated columns, along with quite a few other restrictions.
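As an illustration, the declaration below fails in SQL Server 2014 because of the foreign key and check constraints;
the table and column names are hypothetical:

-- Fails: foreign key and check constraints are not supported
-- on memory-optimized tables in SQL Server 2014
create table dbo.OrderItems
(
    OrderItemId int not null
        primary key nonclustered hash with (bucket_count = 1048576),
    OrderId int not null
        foreign key references dbo.Orders(OrderId),
    Quantity int not null
        check (Quantity > 0)
)
with (memory_optimized = on, durability = schema_and_data);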
I would like to discuss a few less obvious items that can greatly increase migration cost. The first is the 8,060-byte
maximum row size limitation in memory-optimized tables, which comes without any off-row data storage support. This
limitation can lead to a significant amount of work when the existing active OLTP tables use LOB data types, such as
(n)varchar(max) or xml. While it is possible to change the data types by limiting the size of the strings, storing xml
as text or in a binary format, and/or storing large objects in separate tables, such changes are complex, time-consuming,
and require careful planning, especially if the table has multiple LOB columns defined. Do not forget that in-memory
OLTP does not allow you to create a table if there is a possibility that the size of a row exceeds 8,060 bytes. For
example, you cannot create a table with three varchar(3000) columns.
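The sketch below demonstrates this: creation fails because three varchar(3000) columns could, in principle, produce
a row wider than 8,060 bytes (the table name is hypothetical):

-- Fails: the potential row size (3 x 3,000 bytes plus overhead)
-- exceeds the 8,060-byte limit
create table dbo.WideRow
(
    ID int not null
        primary key nonclustered hash with (bucket_count = 1024),
    Col1 varchar(3000) not null,
    Col2 varchar(3000) not null,
    Col3 varchar(3000) not null
)
with (memory_optimized = on, durability = schema_and_data);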
Indexing of memory-optimized tables is another important factor. While range indexes can mimic some of
the behavior of indexes on on-disk tables, there is still a significant difference between them. Range indexes are
implemented as a singly-linked list, and they will not help much if the data needs to be accessed in the opposite
sorting order of the index key. This often requires you to reevaluate your index strategy when a table is moved from
disk into memory. However, the bigger issue with indexing is the requirement that indexed text columns use a binary
collation. This is a breaking change in system behavior, and it often requires non-trivial changes in the code and
some sort of data conversion.
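To illustrate, an index on a text column in SQL Server 2014 requires a BIN2 collation, which makes all comparisons
case- and accent-sensitive; the table and column names below are hypothetical:

create table dbo.Customers
(
    CustomerId int not null
        primary key nonclustered hash with (bucket_count = 1048576),
    -- Indexed text columns must use a binary collation; as a result,
    -- 'smith' and 'SMITH' no longer compare as equal
    Name varchar(64) collate Latin1_General_100_BIN2 not null,
    index IX_Name nonclustered (Name)
)
with (memory_optimized = on, durability = schema_and_data);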
 