the access methods required (for both transaction-based and throughput-based workloads) in larger environments, it is common in smaller organizations to implement them as a mixed workload to save on hardware and administration costs.
When implementing mixed workloads in large database environments, the bandwidth of the storage arrays, the disk controllers, and the HBA cards on all the servers must be taken into account. Because access patterns change frequently in a mixed workload, storage-level cache may be advantageous only for write operations; read-cache hit rates tend to be low. Server configurations such as Oracle Database Machine have been designed for such environments.
Choosing the Storage Array
There are several factors to take into account when choosing a storage array and its adapters. A storage array's internal architecture can vary between vendors and even between models. The choice of storage array normally depends on the storage communication protocol already in use in the organization, along with other factors such as the following:
Fibre Channel (FC) or SCSI
SAS or SATA
Support for hardware RAID
Maximum storage bandwidth
Storage arrays typically have several front-end adapters, each with two sides. One side has ports that connect to the Fibre Channel switch. The other side connects to the cache controller located inside the storage array.
The cache controller manages the storage-array cache, which has both a read and a write area. The read cache stores recently accessed data, and the write cache buffers writes (write-back cache). The combined read/write cache can range from 128GB to 2TB 1 for high-end arrays. In cases where the cache is mirrored, the usable capacity is half that.
When the cache controller receives a read request for a block via the front-end port, it checks the cache for the requested buffer block. If the data buffer is found in the cache, the block is returned through the Fibre Channel network and eventually to the application.
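The read path just described can be sketched as a simple cache lookup. The following is a minimal illustration, not any vendor's actual implementation; the class name, the dictionary standing in for the back-end disks, and the block-numbering scheme are all assumptions made for the example:

```python
# Minimal sketch of a storage-array read path: check the read cache
# first, and fall back to the back-end disks only on a miss.
class ReadCache:
    def __init__(self, backing_store):
        self.cache = {}                      # block number -> data buffer
        self.backing_store = backing_store   # stands in for the back-end disks

    def read_block(self, block_no):
        if block_no in self.cache:           # cache hit: serve from memory
            return self.cache[block_no]
        data = self.backing_store[block_no]  # cache miss: go to disk
        self.cache[block_no] = data          # keep recently accessed data cached
        return data

disks = {0: b"block-0", 1: b"block-1"}
cache = ReadCache(disks)
cache.read_block(0)   # miss: fetched from the back-end disks
cache.read_block(0)   # hit: served from the read cache
```

The key point the sketch captures is that a hit avoids the disk entirely; everything served out of cache returns at memory rather than spindle latency.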
The cache controller can also prefetch sequentially accessed data blocks. If it determines that the application is accessing contiguous disk blocks, prefetch algorithms (governed by configurable thresholds) are triggered to stage the data in the read cache, providing a significant improvement in access time.
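The threshold-based sequential detection described above can be sketched as follows. The threshold and prefetch depth are illustrative values, not figures from any particular array:

```python
# Sketch of threshold-based sequential prefetch: after a run of N
# contiguous block reads, the next few blocks are staged in the read
# cache ahead of the application's requests.
class Prefetcher:
    def __init__(self, threshold=3, depth=4):
        self.threshold = threshold   # contiguous reads before prefetch fires
        self.depth = depth           # how many blocks to read ahead
        self.last_block = None
        self.run_length = 0
        self.prefetched = set()      # blocks staged in the read cache

    def on_read(self, block_no):
        if self.last_block is not None and block_no == self.last_block + 1:
            self.run_length += 1     # extends the current sequential run
        else:
            self.run_length = 1      # random access resets the run
        self.last_block = block_no
        if self.run_length >= self.threshold:
            # Sequential pattern detected: stage the next blocks.
            self.prefetched.update(
                range(block_no + 1, block_no + 1 + self.depth))

p = Prefetcher()
for b in (10, 11, 12):   # three contiguous reads trigger the prefetch
    p.on_read(b)
```

Note that a non-contiguous read resets the run counter, which is why prefetch helps sequential scans without polluting the cache during random access.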
As with the read request discussed earlier, a write request is received by the cache controller via the front-end port; the data is written to the write-cache area first and destaged to the back-end disks later. Most storage arrays have a non-volatile random access memory (NVRAM) battery system that protects in-flight operations from loss of power.
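The write-back behavior can be sketched in the same style. This is a simplification under the assumption that destaging happens in a single batch; real arrays destage continuously in the background:

```python
# Sketch of a write-back cache: a write is acknowledged as soon as it
# lands in the (battery-protected) write cache, and is destaged to the
# back-end disks later.
class WriteBackCache:
    def __init__(self):
        self.write_cache = {}   # dirty buffers, protected by NVRAM/battery
        self.disks = {}         # stands in for the back-end disks

    def write_block(self, block_no, data):
        self.write_cache[block_no] = data   # acknowledged immediately
        return "ack"

    def destage(self):
        # Later, dirty buffers are flushed down to the back-end disks.
        self.disks.update(self.write_cache)
        self.write_cache.clear()

c = WriteBackCache()
c.write_block(7, b"payload")    # fast: only touches the write cache
c.destage()                     # data now durable on the back-end disks
```

This is why the NVRAM battery matters: between the acknowledgment and the destage, the write cache is the only copy of the data.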
Storage-Wide Considerations for Performance
Poorly written SQL queries are almost always the primary cause of performance issues, and making queries efficient by optimizing their access paths can fix performance problems to a great extent. On the storage-subsystem side, high I/O activity can likewise cause slower performance, for reasons such as the following:
The throughput of the disks is low.
The SAN has not been optimally configured.
1 Configuration: EMC VMAX 20K, 2 engines; 128GB shared global cache, mirrored (64GB usable per engine).
 