(b, p). Obviously, page-level mapping is flexible but requires a large amount of
memory space to store the mapping table.
A block-level mapping, on the contrary, needs less space to store the mapping
table, because only logical blocks are mapped. But because the page-number
components of page identifiers cannot be changed, an expensive copy-erase-write
operation sequence is needed when updating a valid page in a block: all the valid
pages in the block are fixed in the buffer, a clean block is allocated, the updated
page together with all the other valid pages of the block are written to the erased
block at their (unchanged) page addresses, and the logical block is remapped to
the new block in the FTL.
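To make the sequence concrete, the following is a minimal sketch in Python of a block-level update. The Flash model, the class names, and the helper methods (program, erase, update_page) are hypothetical simplifications for illustration, not an actual FTL implementation.

```python
PAGES_PER_BLOCK = 4  # hypothetical block size

class Flash:
    """Flash memory as a list of blocks; a page can be programmed only
    after its block has been erased (modeled by the None marker)."""
    def __init__(self, num_blocks):
        self.blocks = [[None] * PAGES_PER_BLOCK for _ in range(num_blocks)]

    def erase(self, b):
        self.blocks[b] = [None] * PAGES_PER_BLOCK

    def program(self, b, p, data):
        assert self.blocks[b][p] is None, "pages cannot be overwritten in place"
        self.blocks[b][p] = data

class BlockLevelFTL:
    """Maps logical block numbers to physical block numbers; the page
    number within a block is fixed and cannot be remapped."""
    def __init__(self, flash, free_blocks):
        self.flash = flash
        self.free = list(free_blocks)   # pool of clean (erased) blocks
        self.map = {}                   # logical block -> physical block

    def update_page(self, lb, p, data):
        old = self.map[lb]
        new = self.free.pop()              # allocate a clean block
        for q in range(PAGES_PER_BLOCK):   # copy valid pages, substituting
            src = data if q == p else self.flash.blocks[old][q]  # the update
            if src is not None:
                self.flash.program(new, q, src)  # same page address q
        self.flash.erase(old)              # the old block becomes clean
        self.free.append(old)
        self.map[lb] = new                 # remap the logical block
```

Note that a single page update forces copying every valid page of the block, which is exactly the cost analyzed next.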
Assume that there are n pages in a block. Writing pages sequentially into clean
locations in the blocks in page-number order incurs at most one page-write operation
and 1/n block-erase operations per page on the average. Updating a valid page
incurs one page-read operation, one block-erase operation, and one page-write
operation in the worst case, when a page-level mapping is used, but with a block-
level mapping, an update of a valid page may incur up to n − 1 page reads, one block
erase, and n page writes.
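As a back-of-the-envelope check of these counts, the following snippet evaluates them for a hypothetical block size of n = 64 pages:

```python
n = 64  # hypothetical pages per block

# Sequential writes in page-number order: one page write per page,
# plus one block erase amortized over the n pages of the block.
seq_writes_per_page = 1
seq_erases_per_page = 1 / n                       # 0.015625

# Worst-case update of one valid page with a block-level mapping:
# read the n - 1 other valid pages, erase one block, write n pages.
blk_reads, blk_erases, blk_writes = n - 1, 1, n   # 63, 1, 64
```

The gap between one amortized erase per n pages and a full block rewrite per update is what motivates the hybrid schemes described next.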
Many FTLs adopt a hybrid approach, using a block-level mapping to manage
most blocks as “data blocks” and a page-level mapping to manage a small
set of “log blocks,” which work as a buffer to accept incoming write requests
efficiently. The FTL mapping table is maintained in persistent flash memory and
rebuilt in volatile buffer memory at startup time.
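A minimal sketch of the hybrid address translation follows; the class name, the two dictionaries, and the write interface are assumptions made for illustration. The newest copy of a page may live in a page-mapped log block, which is therefore checked before the block-level map:

```python
class HybridFTL:
    def __init__(self, pages_per_block):
        self.n = pages_per_block
        self.block_map = {}   # logical block -> physical data block
        self.log_map = {}     # (logical block, page) -> (log block, page)

    def translate(self, lb, p):
        """Translate a logical (block, page) address to a physical one."""
        if (lb, p) in self.log_map:        # newest copy is in a log block
            return self.log_map[(lb, p)]
        return (self.block_map[lb], p)     # page offset is unchanged

    def write(self, lb, p, log_block, free_page):
        # Incoming writes go to the next free page of a log block; the
        # page-level log map records where each logical page went, so no
        # copy-erase-write sequence is needed on the data block.
        self.log_map[(lb, p)] = (log_block, free_page)
```

When the log blocks fill up, their contents must eventually be merged back into data blocks, which is one of the jobs of the garbage collector described below.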
To ensure that clean blocks are available when updated pages are to be flushed,
a garbage collector is run in the background, scanning flash-memory blocks and
recycling invalidated pages. If a page-level mapping is used, the valid pages in
the scanned block are copied out and condensed into a new block, the mapping is
adjusted, and the old locations are marked as invalid, thus obtaining a whole block
of invalidated pages that can be erased.
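The following sketch shows page-level garbage collection of one victim block, reusing the hypothetical Flash model from the earlier sketch; the function signature and the valid-page set passed to it are assumptions for illustration:

```python
def collect(flash, page_map, victim, clean, valid):
    """Condense the valid pages of the victim block into a clean block.

    page_map: logical page -> (block, page); valid: set of logical pages
    whose current copy resides in the victim block."""
    q = 0
    for lp in valid:
        b, p = page_map[lp]
        flash.program(clean, q, flash.blocks[b][p])  # copy out valid page
        page_map[lp] = (clean, q)                    # adjust the mapping
        q += 1
    flash.erase(victim)          # every page in victim is now invalid
    return victim                # victim rejoins the pool of clean blocks
```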
With the behavior described above, a flash-memory-based SSD is truly efficient
for permanent database storage only if most of the write operations are sequential
or if they can be turned into sequential writes using a large buffer from which
modified pages are flushed in large groups. Due to constant page relocation, physical
fragmentation of data is also a major problem with flash memory.
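One way to obtain such sequential behavior is a buffer that absorbs random page updates and flushes them as one batch into consecutive clean pages. The sketch below, with a hypothetical interface and again the Flash model from above, is one possible realization, not a prescribed design:

```python
class SequentialFlushBuffer:
    """Collects randomly updated pages and writes them out in one
    sequential batch, so each flush fills clean blocks front to back."""
    def __init__(self, flash, free_blocks, page_map, pages_per_block):
        self.flash, self.free = flash, free_blocks
        self.map, self.n = page_map, pages_per_block
        self.dirty = {}                    # logical page -> new contents

    def update(self, lp, data):
        self.dirty[lp] = data              # absorb the random write

    def flush(self):
        block, q = self.free.pop(), 0
        for lp, data in sorted(self.dirty.items()):
            if q == self.n:                # block full: take the next one
                block, q = self.free.pop(), 0
            self.flash.program(block, q, data)
            self.map[lp] = (block, q)      # old copies become garbage
            q += 1
        self.dirty.clear()
```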
The transaction log, being append-only, is most suitable for implementation on
flash memory, because the pages in each block are written one by one, in the
sequential order of their addresses, and a page once written is never rewritten
and hence never relocated. At the other extreme are the conventional index
structures such as the B-tree, with their frequent random in-place writes, which, if
implemented as such on flash memory, would incur constant page relocation at
every page update and thus also a big overhead in maintaining the FTL
mapping.
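The log's ideal write pattern is easy to see in code. The sketch below, with a hypothetical LogWriter interface over the Flash model used above, appends log pages strictly in address order and never rewrites a page:

```python
class LogWriter:
    def __init__(self, flash, free_blocks, pages_per_block):
        self.flash, self.free, self.n = flash, free_blocks, pages_per_block
        self.block, self.next_page = self.free.pop(), 0

    def append(self, record_page):
        if self.next_page == self.n:       # current block is full;
            self.block = self.free.pop()   # continue in a clean block
            self.next_page = 0
        self.flash.program(self.block, self.next_page, record_page)
        self.next_page += 1                # strictly sequential, never rewritten
```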
The database structures considered in the previous sections of this chapter have
features that make them more suitable than the conventional database structures for
implementation on flash memory. The distinctive property of the write-optimized
B-tree, that is, turning a set of random page writes into a single large sequential
write, makes it well suited to flash memory.