a compression algorithm can be used to reduce the space occupied by a
large file, but this implies extra time for the decompression process.
2. Query-update trade-off: Access to data can be made more efficient
by imposing some structure upon it. However, the more elaborate the
structure, the more time is taken to build it and to maintain it when its
contents change. For example, sorting the records of a file according to
a key field allows them to be located more easily, but there is a greater
overhead upon insertions to keep the file sorted.
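As a minimal sketch of this trade-off (hypothetical code, not tied to any particular DBMS), the example below keeps records sorted by a key field: lookups can use binary search, but every insertion has to shift later records to keep the file sorted.

```python
import bisect

# Records kept sorted by key: lookup is fast (binary search), but each
# insertion must shift the records that follow, the cost of keeping order.
class SortedFile:
    def __init__(self):
        self.keys = []      # sorted key field of each record
        self.records = []   # records stored in the same order

    def lookup(self, key):
        # Binary search over the sorted keys.
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.records[i]
        return None

    def insert(self, key, record):
        # The insertion point is found quickly, but list.insert shifts
        # every later entry: the extra overhead of a sorted file.
        i = bisect.bisect_left(self.keys, key)
        self.keys.insert(i, key)
        self.records.insert(i, record)

f = SortedFile()
f.insert(42, {"name": "Ada"})
f.insert(7, {"name": "Grace"})
print(f.lookup(7))   # fast lookup thanks to the sorted order
```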
Further, once an initial physical design has been implemented, it is
necessary to monitor the system and to tune it as a result of the observed
performance and any changes in requirements. Many DBMSs provide utilities
to monitor and tune the operations of the system.
As the functionality provided by current DBMSs varies widely, physical
design requires one to know the various techniques for storing and finding
data that are implemented in the particular DBMS that will be used.
A database is organized on secondary storage into one or more files,
where each file consists of one or several records and each record consists
of one or several fields. Typically, each tuple in a relation corresponds to a
record in a file. When a user requests a particular tuple, the DBMS maps
this logical record into a physical disk address and retrieves the record into
main memory using the file access routines of the operating system.
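A simplified view of this mapping is sketched below, assuming a hypothetical file of fixed-size records packed into fixed-size blocks; the block size, record size, and layout are illustrative assumptions, and actual DBMS storage formats are more elaborate.

```python
BLOCK_SIZE = 4096      # assumed block size in bytes
RECORD_SIZE = 128      # assumed fixed record size in bytes
RECORDS_PER_BLOCK = BLOCK_SIZE // RECORD_SIZE

def read_record(path, record_number):
    """Translate a logical record number into a (block, offset) pair and
    fetch the record using ordinary file access routines."""
    block_number = record_number // RECORDS_PER_BLOCK
    offset_in_block = (record_number % RECORDS_PER_BLOCK) * RECORD_SIZE
    with open(path, "rb") as f:
        # The whole block is read: transfer happens in units of blocks.
        f.seek(block_number * BLOCK_SIZE)
        block = f.read(BLOCK_SIZE)
    return block[offset_in_block:offset_in_block + RECORD_SIZE]
```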
Data are stored on a computer disk in disk blocks (or pages) that are set
by the operating system during disk formatting. Transfer of data between
main memory and the disk takes place in units of disk blocks.
DBMSs store data on database blocks (or pages). One important aspect
of physical database design is the need to provide a good match between
disk blocks and database blocks, on which logical units such as tables and
records are stored. Most DBMSs provide the ability to specify a database
block size. The selection of a database block size depends on several issues.
For example, most DBMSs manage concurrent access to the records using
some kind of locking mechanism. If a record is locked by a transaction that
intends to modify it, no other transaction can modify that record (although
normally several transactions can read a record as long as none tries to
write it). In some DBMSs, the finest locking granularity is
at the page level, not at the record level. Therefore, the larger the page size,
the larger the chance that two transactions will request access to entries on
the same page. On the other hand, for optimal disk efficiency, the database
block size must be equal to, or a multiple of, the disk block size.
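A rough back-of-the-envelope calculation, with purely illustrative figures, shows how these two considerations interact: the database block size is kept a multiple of the disk block size, and a larger block then holds more records, all of which fall under a single lock when locking is at the page level.

```python
DISK_BLOCK = 4096    # assumed disk block size set at formatting time (bytes)
RECORD = 200         # assumed average record size (bytes)

for disk_blocks_per_page in (1, 2, 4):
    db_block = disk_blocks_per_page * DISK_BLOCK   # multiple of the disk block size
    records_per_page = db_block // RECORD
    print(f"database block {db_block:6d} B -> "
          f"{records_per_page:3d} records covered by one page-level lock")
```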
DBMSs reserve a storage area in the main memory that holds several
database pages, which can be accessed for answering a query without reading
those pages from the disk. This area is called a buffer. When a request is
issued to the database, the query processor checks if the required data records
are included in the pages already loaded in the buffer. If so, data are read
from the buffer and/or modified. In the latter case, the modified pages are
marked as such and eventually written back to the disk. If the pages needed to
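The behavior described above can be pictured with the following simplified sketch of a buffer pool; the LRU replacement policy and the interface are assumptions made for illustration, not the mechanism of any specific DBMS.

```python
from collections import OrderedDict

class BufferPool:
    """Tiny illustrative buffer pool with LRU eviction (an assumption;
    real DBMSs use more elaborate replacement policies)."""

    def __init__(self, capacity, read_page, write_page):
        self.capacity = capacity
        self.read_page = read_page     # function: page_id -> page bytes
        self.write_page = write_page   # function: (page_id, bytes) -> None
        self.pages = OrderedDict()     # page_id -> (bytes, dirty flag)

    def get(self, page_id):
        if page_id in self.pages:                  # buffer hit: no disk read
            self.pages.move_to_end(page_id)
            return self.pages[page_id][0]
        data = self.read_page(page_id)             # buffer miss: fetch from disk
        self._put(page_id, data, dirty=False)
        return data

    def modify(self, page_id, data):
        self.get(page_id)                          # ensure the page is cached
        self._put(page_id, data, dirty=True)       # mark dirty; write back later

    def _put(self, page_id, data, dirty):
        self.pages[page_id] = (data, dirty)
        self.pages.move_to_end(page_id)
        while len(self.pages) > self.capacity:     # evict least recently used page
            victim, (vdata, vdirty) = self.pages.popitem(last=False)
            if vdirty:
                self.write_page(victim, vdata)     # flush modified page to disk
```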