How It Works
Specifying ASM disk group attributes on Exadata is similar to doing so in non-Exadata Oracle ASM environments, but on Exadata a few attributes bear special consideration.
The au_size attribute determines your ASM allocation unit (AU) size, which is essentially the ASM stripe size: it governs how much data is written to one disk in an ASM disk group before ASM continues to the next disk. In Oracle 11gR2, AU_SIZE defaults to 1 MB, but on Exadata 4 MB is the preferred, performance-optimal setting. When Oracle allocates extents, it does so in units of this AU size. If, for example, 64 MB of extents must be allocated for, say, an INSERT statement, then 16 AUs will have data "copied to them" in 4 MB increments. If the storage characteristics of the segment or tablespace are such that the extent sizes are smaller than 4 MB, then (4 MB / extent size) worth of contiguous extents is written to an ASM disk before ASM moves to the next disk. If the extent size of the table is larger than 4 MB, then each extent will span multiple AUs. So, consider an example in which the AU size is 4 MB, the tablespace uses a uniform extent length of 64 KB, and 64 MB of data is being inserted:
- As data is being inserted, the first 64 KB extent goes into "AU #1", which happens to be on disk o/192.168.10.3/DATA_CD_00_cm01cel01.
- Since 64 KB < AU_SIZE, up to 64 extents (4 MB / 64 KB = 64 extents) are written to o/192.168.10.3/DATA_CD_00_cm01cel01.
- When the 65th extent is required to be allocated, Oracle will, say, jump to o/192.168.10.5/DATA_CD_00_cm01cel03 and write extents 65-128.
- The next set of 64 extents will be written to, say, o/192.168.10.4/DATA_CD_00_cm01cel02.
- This pattern continues until all 1,024 of the 64 KB extents are allocated (64 MB / 64 KB = 1,024 extents).
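Because au_size can only be specified when a disk group is created, not altered afterward, it has to be chosen up front. A minimal sketch of creating a disk group with the 4 MB AU size discussed above might look like the following; the disk group name, disk discovery string, and version values are illustrative assumptions, not taken from this recipe:

```sql
-- Hypothetical example: DATA, the 'o/*/DATA_CD_*' discovery string, and
-- the 11.2.0.3 compatibility values are illustrative placeholders.
CREATE DISKGROUP DATA NORMAL REDUNDANCY
  DISK 'o/*/DATA_CD_*'
  ATTRIBUTE 'compatible.asm'          = '11.2.0.3',
            'compatible.rdbms'        = '11.2.0.3',
            'cell.smart_scan_capable' = 'TRUE',
            'au_size'                 = '4M';
```

Since the allocation unit size is fixed for the life of the disk group, getting it right at creation time is what makes the 4 MB striping behavior described above possible.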
In this example, we chose at random which physical disks receive extents. In reality, ASM balances allocation over time and ensures an even distribution of extents across all disks in the disk group. Please see Recipe 9-6 to learn how to measure this.
Both the compatible.rdbms and compatible.asm attributes should be set to the version of the software you're running in the RDBMS Oracle Home and Grid Infrastructure Oracle Home, respectively.
Note
In cases where your Grid Infrastructure software is patched to a higher level than your RDBMS binaries, it is possible to have a lower compatible.rdbms version. However, if you're patching your Exadata Database Machine on a regular basis, these will very likely be set to the same values.
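To see what your disk groups are currently set to, you can query V$ASM_ATTRIBUTE from the ASM instance. As a sketch (the disk group name in the ALTER statement is illustrative):

```sql
-- Check compatibility attributes for each disk group (run from the ASM instance)
SELECT dg.name AS diskgroup, a.name AS attribute, a.value
FROM   v$asm_diskgroup dg
JOIN   v$asm_attribute a ON a.group_number = dg.group_number
WHERE  a.name IN ('compatible.asm', 'compatible.rdbms');

-- Compatibility can be advanced in place (it can never be lowered)
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.2.0.3';
```

Note that these attributes can only ever be raised, which is why a mismatch after patching is resolved by advancing the lagging value.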
The cell.smart_scan_capable attribute should be set to TRUE if you wish to utilize Smart Scan for your ASM disk
group. There may be cases where you do not, but typically you should rely on your workload to dictate this and not
override Exadata's most powerful feature with an ASM disk group configuration.
Oracle ASM will drop grid disks from an ASM disk group if they remain offline for longer than the value specified by the disk_repair_time disk group attribute. On Exadata, this timer is driven by the cell disk being offline, not the individual grid disk, which is slightly different behavior than in non-Exadata ASM storage environments. When a physical disk goes offline due to disk failure, Exadata will automatically and immediately drop the affected grid disks from their ASM disk groups through its Pro-Active Disk Quarantine functionality. However, if an entire cell goes offline, as is the case during cell server patching, ASM will wait for the interval specified by the disk_repair_time attribute before dropping grid disks.
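If planned cell maintenance is expected to run longer than the current timer, the attribute can be inspected and raised beforehand. The sketch below assumes a disk group named data and an 8.5-hour window, both illustrative (the 11.2 default for disk_repair_time is 3.6h):

```sql
-- View the current repair timer for each disk group
SELECT dg.name AS diskgroup, a.value AS disk_repair_time
FROM   v$asm_diskgroup dg
JOIN   v$asm_attribute a ON a.group_number = dg.group_number
WHERE  a.name = 'disk_repair_time';

-- Temporarily extend the timer ahead of planned cell maintenance
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8.5h';
```

Extending the timer before patching a cell avoids an unnecessary drop and rebalance when the cell comes back online within the maintenance window.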