2. Place part of the new record in the original block and the rest of the record in the overflow block. The two parts of the record are chained together by physical addresses.
As you have probably realized, placing the whole record or part of it in an overflow block imposes system overhead on every write and read. Block chaining and record chaining come with a price. Block management therefore becomes significant.
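To make the mechanism concrete, here is a minimal sketch of record chaining; the structures and addresses are illustrative assumptions, not the layout of any particular DBMS. Following the chain shows where the extra cost comes from: each hop to an overflow block is another block access.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RecordFragment:
    """One piece of a record stored inside a block (illustrative only)."""
    data: bytes
    # Physical address (block number, slot) of the next fragment,
    # or None if this fragment completes the record.
    next_fragment: Optional[tuple[int, int]] = None


def read_record(blocks: dict[int, list[RecordFragment]],
                block_no: int, slot: int) -> bytes:
    """Follow the chain of fragments to reassemble a full record.

    Every hop to another block would be an extra I/O in a real system,
    which is the overhead the text refers to.
    """
    result = b""
    addr = (block_no, slot)
    while addr is not None:
        blk, slt = addr
        fragment = blocks[blk][slt]     # one block access per hop
        result += fragment.data
        addr = fragment.next_fragment
    return result


# Record starts in block 10, slot 0 and spills into overflow block 99.
blocks = {
    10: [RecordFragment(b"first part of the record ", (99, 0))],
    99: [RecordFragment(b"rest of the record")],
}
print(read_record(blocks, 10, 0))
```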
Block Size As you know, the file block is the unit of data transfer between disk
storage and main memory. The goal is to be able to transfer large volumes of data
from disk storage in a few I/O operations. How can you accomplish this? It would
seem that if you make the block size large, you fit more records in a block and there-
fore move a greater number of records in fewer I/O operations. Let us consider the
option of making the block size large.
First, make sure your operating system does not fix the block size. If the operat-
ing system supports a certain fixed block size and does not allow larger file blocks,
then this option of using large block sizes is not open to you. Usually, file block sizes
are allowed to be defined in multiples of the operating system block size. If this is
the case in your environment, take advantage of this option.
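A small back-of-the-envelope sketch of the arithmetic may help; the operating system block size, record length, and file size below are assumed figures chosen only for illustration.

```python
OS_BLOCK_SIZE = 4096          # assumed operating system block size, in bytes
RECORD_SIZE = 200             # assumed average record length, in bytes
TOTAL_RECORDS = 100_000       # assumed number of records in the file

for multiple in (1, 2, 4, 8):
    block_size = multiple * OS_BLOCK_SIZE            # file block as a multiple of the OS block
    records_per_block = block_size // RECORD_SIZE
    blocks_needed = -(-TOTAL_RECORDS // records_per_block)   # ceiling division
    print(f"{block_size:6d}-byte blocks: {records_per_block:3d} records/block, "
          f"{blocks_needed:5d} I/Os for a full scan")
```

Doubling the block size roughly halves the number of I/O operations needed to read the whole file, which is the efficiency the text describes.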
Each block contains a header area and a data area. The data area represents the effective utilization of the block. The header holds control information, and its size does not change even when the data area becomes larger. For a small block, the header therefore takes up a larger percentage of the total block size; for a large block, it takes up a smaller percentage.
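A quick worked computation makes the point; the 100-byte header size is an assumed figure, not a standard value.

```python
HEADER_SIZE = 100   # assumed fixed header size, in bytes

for block_size in (2048, 4096, 8192, 16384):
    overhead = HEADER_SIZE / block_size * 100
    print(f"{block_size:6d}-byte block: header is {overhead:4.1f}% of the block")
```

The fixed header shrinks from roughly 5% of a 2 KB block to well under 1% of a 16 KB block.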
What are the effects of having large blocks?
• Decrease in the number of I/O operations
• Less space utilized by block headers as a percentage of the total block size
• Many unnecessary records retrieved with the full block even when only a few records are requested
You cannot arbitrarily keep on increasing the block size assuming that every
increase results in greater efficiency. If the data requests in your environment
typically require just a few records for each request, then very large block sizes result
in wasteful data retrievals into memory buffers. What works is not the largest block size but the optimal one. Determine the optimal block size for each file on the basis of its record sizes and data access patterns.
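One rough way to see the trade-off is to estimate how many bytes a typical request drags into the buffers but never uses; the record size and records-per-request figures below are assumptions for illustration only.

```python
RECORD_SIZE = 200        # assumed average record length, in bytes
RECORDS_PER_REQUEST = 3  # assumed typical number of records a request actually needs

for block_size in (2048, 4096, 8192, 32768):
    # Worst case: each requested record sits in a different block,
    # so one whole block is transferred per requested record.
    bytes_transferred = RECORDS_PER_REQUEST * block_size
    bytes_needed = RECORDS_PER_REQUEST * RECORD_SIZE
    wasted = bytes_transferred - bytes_needed
    print(f"{block_size:6d}-byte blocks: {wasted:6d} wasted bytes per request")
```

For a workload of small, scattered requests, the wasted transfer grows in step with the block size, which is why the largest block is not automatically the best one.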
Block Usage Parameters Consider the storage and removal of records in a block. As transactions happen in the database environment, new records are added to a block, using up empty space; records get deleted, freeing up space; and records change their lengths when they are updated, either using up more space or freeing some. DBMSs provide two block usage parameters for optimizing how this space is managed. In the various commercial DBMSs, these block usage parameters may go by different names. Nevertheless, the purpose and function of each block usage parameter remain the same.
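As an illustration of the general idea only: Oracle, for instance, exposes such parameters as PCTFREE and PCTUSED. The toy class below simulates that style of behavior with made-up names; it is a sketch of the concept, not any product's actual mechanism.

```python
class Block:
    """Toy model of block usage parameters (parameter names are illustrative).

    pct_free reserves a slice of the block for records that grow when updated;
    pct_used decides when a block that stopped taking inserts becomes a
    candidate for inserts again after deletions.
    """

    def __init__(self, size, pct_free=20, pct_used=40):
        self.size = size
        self.used = 0
        self.pct_free = pct_free
        self.pct_used = pct_used

    def can_insert(self, record_size):
        # Leave pct_free percent of the block untouched for future updates.
        return self.used + record_size <= self.size * (100 - self.pct_free) / 100

    def insert(self, record_size):
        if not self.can_insert(record_size):
            return False          # caller looks for another block with room
        self.used += record_size
        return True

    def delete(self, record_size):
        self.used -= record_size

    def eligible_for_inserts_again(self):
        # After deletions, the block is offered for inserts once usage
        # falls to pct_used percent or less.
        return self.used <= self.size * self.pct_used / 100


blk = Block(size=4096)           # assumed 4 KB block
while blk.insert(300):           # fill with 300-byte records until the reserve is hit
    pass
print(blk.used, blk.eligible_for_inserts_again())   # 3000 False
for _ in range(5):
    blk.delete(300)
print(blk.used, blk.eligible_for_inserts_again())   # 1500 True
```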