technology that can direct application workloads to different levels of storage media to achieve the most suitable performance and cost characteristics. In addition to the redundant connection paths, a disk RAID configuration such as RAID 10 or RAID 5 should be implemented to avoid any single point of failure at the disk level.
Using the configuration shown in Figure 5-4, let's examine how multiple I/O paths are formed from a RAC node to the storage servers. In an FC storage configuration, all the devices connected to the FC fabric, such as HBAs and storage controller ports, are given a 64-bit identifier called a World Wide Name (WWN). For example, an HBA's WWN can be found in Linux as follows:
$ more /sys/class/fc_host/host8/port_name
0x210000e08b923bd5
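If a host has more than one HBA, each port shows up as its own fc_host entry under sysfs. The following loop, a minimal sketch that assumes the standard sysfs layout shown above, prints the port WWN of every FC HBA port on the host; the second entry in the sample output is a placeholder for a second HBA:
$ for h in /sys/class/fc_host/host*; do echo "$h: $(cat "$h/port_name")"; done
/sys/class/fc_host/host8: 0x210000e08b923bd5
/sys/class/fc_host/host9: 0x210100e08b923bd5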
On the switch layer, a zone is configured to connect an HBA's WWN with a storage controller port's WWN. As shown in Figure 5-4, the components are connected as follows:
1. RAC host 1 has HBA1-1 and HBA1-2, which connect to FC switches SW1 and SW2, respectively.
2. RAC host 2 has HBA2-1 and HBA2-2, which connect to FC switches SW1 and SW2, respectively.
3. There are two FC controllers, FC1 and FC2, each of which is connected to both FC switches SW1 and SW2.
The purpose of storage zoning is to create multiple independent physical I/O paths from the RAC hosts through the FC switches to the storage, so that no single component becomes a point of failure.
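The exact zoning commands depend on the switch vendor. As a minimal sketch, assuming a Brocade Fabric OS switch, a zone pairing HBA1-1's port WWN (the value read from sysfs earlier) with a storage controller port WWN could be created and activated like this; the zone name, configuration name, and the controller WWN are illustrative only:
SW1:admin> zonecreate "rac1_hba11_fc1", "21:00:00:e0:8b:92:3b:d5; 50:06:01:60:47:20:1b:4f"
SW1:admin> cfgcreate "rac_san_cfg", "rac1_hba11_fc1"
SW1:admin> cfgenable "rac_san_cfg"
A matching zone would be created on SW2 for HBA1-2, with corresponding zones for the HBAs of RAC host 2, so that every HBA can reach the storage controller ports assigned to it.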
After zoning, each RAC host establishes multiple independent physical I/O paths to the SAN storage. For
example, RAC host 1 has four paths:
I/O Path 1: HBA1-1, SW1, FC1
I/O Path 2: HBA1-1, SW1, FC2
I/O Path 3: HBA1-2, SW2, FC1
I/O Path 4: HBA1-2, SW2, FC2
These redundant I/O paths give a host multiple independent ways to reach a storage volume. The paths that use different HBAs show up as different devices on the host (such as /dev/sda and /dev/sdc), even though these devices all point to the same volume. What they have in common is that they carry the same SCSI ID. In the next section, I will explain how to create a logical device that combines all of the redundant I/O paths. Because this logical device is backed by multiple independent I/O paths, access to the volume survives multiple component failures, up to the simultaneous loss of one HBA, one switch, and one storage controller.
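You can verify that two device names really point to the same volume by comparing their SCSI IDs. Here is a minimal sketch, assuming the scsi_id utility is installed at /lib/udev/scsi_id (its location and options vary between Linux releases) and using the device names mentioned above; the ID in the output is a placeholder:
$ /lib/udev/scsi_id -g -u -d /dev/sda
36000d310003b2e000000000000000f42
$ /lib/udev/scsi_id -g -u -d /dev/sdc
36000d310003b2e000000000000000f42
Identical output for both device names indicates that they are two paths to the same LUN and can be combined into a single multipath device, as described in the next section.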
An FC SAN provides a highly reliable, high-performance storage solution for a RAC database. However, the cost and complexity of FC components put it out of reach for many small and medium-sized businesses. Meanwhile, the continuously improving speed of Ethernet and the low cost of its components have led to wider adoption of the iSCSI storage protocol. iSCSI SAN storage extends the traditional SCSI storage protocol by sending SCSI commands over IP on Ethernet. This protocol can transfer data at high speed over very long distances, especially when high-performance features are added, such as NICs with TCP/IP Offload Engines (TOE) and switches with low-latency ports. 10 GbE Ethernet allows iSCSI SAN storage to deliver even higher performance. Today, network bandwidths for both FC and iSCSI keep improving: FC has moved from 1 Gbps to 2 Gbps, 4 Gbps, and even 16 Gbps, and iSCSI is moving from 1 GbE to 10 GbE. Both FC and iSCSI storage are able to deliver performance good enough to meet enterprise database needs.
As shown in Figure 5-5, iSCSI storage uses regular Ethernet to connect hosts and storage. With traditional 1 GbE Ethernet, it can use ordinary network cards, cables, and switches for data transfer between servers and storage. To design a 10 GbE iSCSI SAN solution, you have to make sure that every component supports 10 GbE Ethernet, including the network adapters, cables, switches, and storage controllers; of course, this raises the cost of the iSCSI storage deployment.
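On the host side, Linux typically accesses iSCSI storage through the open-iscsi initiator. As a minimal sketch, assuming the open-iscsi (iscsi-initiator-utils) package is installed and using a hypothetical portal address and target IQN, the storage is discovered and logged in to as follows:
# Discover the targets exported by the storage portal (the IP address is illustrative)
$ iscsiadm -m discovery -t sendtargets -p 192.168.10.20:3260
# Log in to a discovered target (the IQN is a placeholder)
$ iscsiadm -m node -T iqn.2001-05.com.example:rac-data -p 192.168.10.20:3260 --login
After the login, the LUNs behind the target appear on the host as ordinary SCSI block devices (/dev/sd*), and the same multipath techniques described for FC apply when the host has more than one network path to the storage.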
 