Internally attached 2.5-inch SSDs do not play a major role in enterprise computing: the majority of these disks use
a SATA 6G interface and are made for high-end consumer desktops and graphics workstations. The next category of
internally attached flash memory is more interesting. Recent processors feature PCI Express version 3, offering a lot
more bandwidth at reduced overhead compared to the previous PCI Express version 2.x.
A WORD ABOUT PCI EXPRESS
PCI Express, short for Peripheral Component Interconnect Express (PCIe), is the x86 world's standard way to
add functionality to a server that is not already available on the mainboard. Examples of such PCIe cards are
10 Gigabit Ethernet cards, Fiber Channel Host Bus Adapters, InfiniBand cards, and the like. PCIe was
designed to replace older standards such as the Accelerated Graphics Port and the older PCI-X and PCI standards.
Unlike some of the standards it replaces, PCIe is a high-speed point-to-point serial I/O bus.
When considering PCIe bandwidth, server vendors often specify the number of lanes to a card slot. These lanes,
broadly speaking, equate to bandwidth. Industry-standard servers use PCIe x4, x8, and x16 slots, most of
which are version 2. Every processor or mainboard supports a certain maximum number of PCIe lanes; the exact
number is usually available from the vendor's website.
PCI Express is currently available in version 3. Thanks to more efficient encoding, the protocol overhead has
been reduced compared to PCIe version 2.x, and the net bit rate has doubled.
PCIe 3.0 has a transfer rate of eight gigatransfers per second (GT/s). Compared to 250 MB/s per lane in
the initial PCIe 1.0, PCIe 3.0 offers a bandwidth of roughly 1 GB/s per lane. With a PCIe 3.0 x16 slot, a theoretical bandwidth of
16 GB/s is possible, which should be plenty for even the most demanding workloads. Most systems currently
deployed, however, still use PCIe 2.x with exactly half the per-lane bandwidth of PCIe 3.0: 500 MB/s per lane. The
number of cards supporting PCIe 3.0 has yet to increase, although that is almost certainly going to happen
while this book is in print.
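The per-lane figures quoted above follow directly from the transfer rates and line encodings of the two standards (8b/10b for PCIe 2.x, 128b/130b for PCIe 3.0). A quick sketch of the arithmetic using awk:

```shell
# Net per-lane throughput = transfer rate * encoding efficiency / 8 bits.
# PCIe 2.x: 5 GT/s with 8b/10b encoding
pcie2_lane=$(awk 'BEGIN { printf "%.0f", 5e9 * 8/10   / 8 / 1e6 }')
# PCIe 3.0: 8 GT/s with 128b/130b encoding
pcie3_lane=$(awk 'BEGIN { printf "%.0f", 8e9 * 128/130 / 8 / 1e6 }')

echo "PCIe 2.x per lane: ${pcie2_lane} MB/s"    # 500 MB/s
echo "PCIe 3.0 per lane: ${pcie3_lane} MB/s"    # 985 MB/s, i.e. roughly 1 GB/s
echo "PCIe 3.0 x16:      $((pcie3_lane * 16)) MB/s"
```

The x16 result of about 15.75 GB/s is the theoretical "16 GB/s" figure; real-world throughput is lower still once protocol overheads beyond encoding are accounted for.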
PCIe is possibly the best way to connect the current ultra-fast flash solutions so they are least slowed down by
hardware and additional protocol bottlenecks. Such cards use single-level cells (SLC) for best performance or
multi-level cells (MLC) for best storage capacity. According to vendor specifications, such devices have low
microsecond response times and offer hundreds of thousands of I/O operations per second. When it comes to the
fastest available storage, PCIe x4 or x8 cards are hard to beat. Under Linux and other supported operating systems,
a PCIe card shows up as a storage device, just like a LUN from a storage array, making it simple either to
add it to an ASM disk group as an ASM disk or, alternatively, to create a suitable file system, such as XFS, on top of
it. The downside to PCIe flash memory is that a number of these cards cannot easily be
configured for redundancy in hardware. PCIe cards are also not hot-swappable, requiring the server to be powered
off if a card needs to be replaced. Nor can they be shared between hosts (yet?), making them unsuitable for Oracle
configurations requiring shared storage.
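Because the card appears as an ordinary block device, putting a file system on it follows the usual steps. A minimal sketch, assuming a hypothetical device name (NVMe-style cards typically appear as /dev/nvme0n1, while older vendor drivers use names such as /dev/fioa; check dmesg or lspci for the actual node):

```shell
# WARNING: destructive -- double-check the device name before running.
DEV=/dev/nvme0n1        # hypothetical node for the PCIe flash card

# Create an XFS file system on the card and mount it for Oracle datafiles.
mkfs.xfs -f "$DEV"
mkdir -p /u02/oradata
mount -t xfs -o noatime "$DEV" /u02/oradata
```

Alternatively, the raw device can be presented to ASM as a candidate disk instead of carrying a file system.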
Another use case for PCIe flash memory is as a second-level buffer cache, a feature known as Database
Smart Flash Cache. With today's hardware supporting terabytes of DRAM, this solution should be evaluated
carefully against the application and the new hardware platform to see whether it provides any benefit. Finally, some vendors
allow you to use the PCIe flash device as a write-through cache between the database host and a Fiber Channel-attached
array. PCIe flash devices used in this way can speed up reads because those reads do not need to use the Fiber
Channel protocol to access data on the array. Since the flash device is write-through, failure of the card does not
impact data integrity.
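Database Smart Flash Cache itself is enabled with two initialization parameters pointing at the flash device. A sketch, assuming the card carries a file system mounted at /u02/flash (path and size here are placeholders to be adjusted for the actual configuration):

```
# init.ora fragment -- hypothetical path and cache size
db_flash_cache_file = '/u02/flash/flashcache.dat'
db_flash_cache_size = 64G
```

For the cache to make sense, the file named by db_flash_cache_file must of course reside on the flash device rather than on conventional storage.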
External flash-based storage solutions can be connected in a number of ways, with Fiber Channel probably the
most common option. Most established vendors of storage arrays offer a new storage tier inside the array based on
flash memory. For most customers, that approach is very easy to implement because it does not require investment
in new network infrastructure. It also integrates seamlessly into the existing fabrics, and the skillset of the storage