DASDs such as magnetic discs made possible the direct retrieval of the required
data record for immediate processing. However, the application program first had
to calculate the physical location of the data record on the disc, using an algorithm
that operated on an identifying key. Whenever it became necessary to move the data
file to another location on the disc, every program that accessed the file had to be
modified.
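A minimal sketch of this key-to-address scheme may help; the record size, slot count, modulo hash, and all names below are illustrative assumptions, not details from the original text:

```python
# Illustrative sketch of direct (hashed) record addressing on a DASD,
# assuming a fixed-length record file and a simple modulo hash.
# All names and constants are hypothetical.

RECORD_SIZE = 128      # bytes per fixed-length record (assumed)
NUM_SLOTS = 10_000     # capacity of the direct-access file (assumed)

def record_address(key: int) -> int:
    """Map an identifying key to a byte offset within the disc file."""
    slot = key % NUM_SLOTS          # the addressing algorithm baked into the program
    return slot * RECORD_SIZE      # physical location derived from the key

def read_record(f, key: int) -> bytes:
    """Seek directly to the computed location and fetch the record."""
    f.seek(record_address(key))
    return f.read(RECORD_SIZE)
```

Because record_address() encodes the file's physical layout, relocating or resizing the file forces a change to every program containing such logic, which is exactly the coupling ISAM was designed to remove.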
The indexed sequential access method (ISAM) was developed to isolate appli-
cation programs from changes to the location of files on the DASD. ISAM uses the
record key to refer to an intermediate index stored on the DASD, which gives the
physical location of the record; ISAM then retrieves the record from the data file
and presents it to the program. In many cases, application programs need to access
a data record by some identifying key other than the existing indexed sequential
key. To relieve application programs of some of this data file housekeeping, gener-
alized routines were written for accessing interrelated records via appropriate record
pointers, and for updating those pointers to reflect changes in the associated record
relationships (e.g., the insertion or deletion of records). These generalized routines
were the precursors of today's database management systems (DBMS).
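The indirection ISAM introduces can be sketched as a simplified in-memory model; the index layout and all names below are assumed for illustration:

```python
# Simplified model of ISAM-style indirection: the program presents a key,
# the access method consults an index to find the physical location, and
# then fetches the record. Index structure and names are illustrative.

import bisect

class IsamFile:
    def __init__(self, data_file, index):
        # index: sorted list of (key, byte_offset) pairs kept on the DASD
        self.data_file = data_file
        self.keys = [k for k, _ in index]
        self.offsets = [off for _, off in index]

    def get(self, key, record_size=128):
        """Locate the record via the index, then retrieve it for the program."""
        i = bisect.bisect_left(self.keys, key)
        if i == len(self.keys) or self.keys[i] != key:
            raise KeyError(key)
        self.data_file.seek(self.offsets[i])   # physical location comes from the index
        return self.data_file.read(record_size)
```

If the data file is moved on the disc, only the index entries change; the application program's calls to get() remain untouched.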
1.1.3 Problems in Maintaining File Systems
The structures of conventional files restrict the efficiency and effectiveness of in-
formation system applications. For example, a change in the types of information
recorded in a file, such as the addition of attributes to its record structure, would,
at the very least, necessitate the recompilation of all applications accessing the
data. Application programs that refer to the changed record format may even have
to be rewritten outright, when modifying them would be more complex than re-
writing them from scratch.
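The recompilation problem stems from programs reading fields at hard-coded offsets; a hypothetical record layout makes this concrete (the fields and sizes below are invented for illustration):

```python
# Hypothetical fixed-format customer record, parsed by hard-coded offsets.
# Field names and sizes are assumptions for illustration only.

import struct

# Original layout: 6-byte id, 20-byte name, 10-byte phone (36 bytes total)
OLD_FORMAT = "6s20s10s"

def parse_old(raw: bytes):
    cust_id, name, phone = struct.unpack(OLD_FORMAT, raw)
    return cust_id, name, phone

# Adding an attribute (say, an 8-byte credit limit) changes the layout:
NEW_FORMAT = "6s20s10s8s"

# Every program compiled against OLD_FORMAT now misreads the file, so all
# of them must be edited and recompiled, even those that never use the
# new field.
```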
As more complex applications are developed, the number of data files referred to
by these applications increases. Such proliferation of files means that a minor change
in either a data file or a program may snowball into a series of major program modi-
fications, creating a maintenance nightmare.
Since the same data exists in several different files, programmers must also
maintain the data by updating all of these files to keep the stored data accurate and
consistent. In the event of master file corruption or incomplete processing due to
system failures or operational human error, data processing practitioners must re-
process the various batches of input data against an earlier, uncorrupted version of
the master file to recover the data. Further complexity is added to the system to
ensure that sensitive data is accessed only by authorized personnel.
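The recovery procedure just described amounts to restoring a backup master and replaying the retained input batches against it; the following schematic sketch assumes a simple key-update model, and all names and logic are illustrative:

```python
# Schematic of master-file recovery by reprocessing: restore the last good
# master file, then re-apply the saved input batches in order. All names
# and the update logic are illustrative assumptions.

def recover_master(backup_master: dict, input_batches: list) -> dict:
    """Rebuild the current master from a backup plus retained input batches."""
    master = dict(backup_master)        # start from the earlier, uncorrupted version
    for batch in input_batches:         # every batch processed since the backup
        for key, update in batch:
            master[key] = update        # re-apply each transaction in order
    return master

# Example: yesterday's master plus two batches of updates kept on hand.
restored = recover_master(
    {"A001": "balance=100"},
    [[("A001", "balance=150")], [("B002", "balance=40")]],
)
```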
Finally, such file-based systems do not support the requirements of management.
Management frequently needs ad hoc reports for decision-making, which requires
processing multiple files in a very short time and places a heavy burden on file
processing systems.